Objectives: Spectro-temporal ripple tests are commonly used in cochlear implant (CI) research as language-independent indicators of speech recognition (in noise) or as stand-alone tests. The test-retest reliability of these tests has scarcely been documented. We evaluated the test-retest reliability of the spectral-temporally modulated ripple test (SMRT) and the spectro-temporal ripple for investigating processor effectiveness (STRIPES) test and correlated their findings with the Dutch/Flemish Matrix speech-in-noise sentence test (MST) in CI recipients. This is the first time spectro-temporal ripple tests have been correlated with an MST. Design: Take-home data from 15 participants over 2 test days were analyzed. Participants were fitted with their clinical speech encoding strategy (Advanced Bionics HiRes Optima) or a 14-channel non-steered monopolar strategy. Test-retest reliability was calculated through intraclass correlation coefficients (ICCs) and visualized through Bland-Altman plots. The association of the spectro-temporal ripple tests with the MST was evaluated through linear regression analysis. Results: The SMRT and STRIPES possessed a similarly rated "good" test-retest reliability (SMRT: ICC = 0.81, confidence interval = 0.67 to 0.92; STRIPES: ICC = 0.87, confidence interval = 0.76 to 0.95) and an identical linear relationship to speech recognition in noise (SMRT: R2 = 0.28, p = 0.04; STRIPES: R2 = 0.28, p = 0.04). Both tests revealed stable variability between session 1 and session 2 outcome scores on Bland-Altman plots. Conclusion: On the basis of our data, both spectro-temporal ripple tests possess similar test-retest reliability and a similar association with the MST.
The SMRT and STRIPES can therefore both be used equally well as a quick indicator of across-listener differences in speech recognition in noise in CI recipients.
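For readers unfamiliar with the reliability statistics used above, the following sketch shows, under simplifying assumptions, how an intraclass correlation coefficient (ICC(2,1): two-way random effects, absolute agreement, single measure) and Bland-Altman limits of agreement can be computed for two test sessions. The function names and example scores are illustrative only; this is not the authors' analysis code.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    scores: array of shape (n_subjects, k_sessions)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between sessions
    ss_tot = ((scores - grand) ** 2).sum()
    ss_err = ss_tot - ss_rows - ss_cols                       # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman(session1, session2):
    """Bias (mean difference) and 95% limits of agreement between two sessions."""
    diff = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical ripple-test thresholds for 5 listeners on two test days
scores = np.array([[2.1, 2.3], [3.0, 2.8], [1.5, 1.6], [4.2, 4.0], [2.7, 2.9]])
print(icc_2_1(scores))
print(bland_altman(scores[:, 0], scores[:, 1]))
```

An ICC near 1 indicates that between-listener differences dominate session-to-session noise, which is what makes a test useful as a quick across-listener indicator.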
Objectives: Many studies have assessed the performance of individuals with cochlear implants (CIs) with electrically evoked compound action potentials (eCAPs). These eCAP-based studies have focused on the amplitude information of the response, without considering the temporal firing properties of the excited auditory nerve fibers (ANFs), such as neural latency and synchrony. These temporal features have been associated with neural health in animal studies and, consequently, could be of importance to clinical CI outcomes. With a deconvolution method, combined with a unitary response, the eCAP can be mathematically unraveled into the compound discharge latency distribution (CDLD). The CDLD reflects both the number and the temporal firing properties of excited ANFs. The present study aimed to determine to what extent the CDLD derived from intraoperatively recorded eCAPs is related to speech perception in individuals with CIs. Design: This retrospective study acquired data on monosyllabic word recognition scores and intraoperative eCAP amplitude growth functions from 124 adult patients with postlingual deafness who received the Advanced Bionics HiRes 90K device. The CDLD was determined for each recorded eCAP waveform by deconvolution. Each of the two Gaussian components of the CDLD was described by three parameters: the amplitude, the firing latency (the average latency of each component of the CDLD), and the variance of the CDLD components (an indication of the synchronicity of excited ANFs). Apart from these six CDLD parameters, the area under the CDLD curve (AUCD) and the slope of the AUCD growth function were determined as well. The AUCD was indicative of the total number of excited ANFs over time.
The slope of the AUCD growth function indicated the increase in the number of excited ANFs with stimulus level. Associations between speech perception and each of these eight CDLD-related parameters were investigated with linear mixed modeling. Results: In individuals with CIs, larger amplitudes of the two CDLD components, a greater AUCD, and steeper slopes of the AUCD growth function were all significantly associated with better speech perception. In addition, a smaller latency variance in the early CDLD component, but not in the late one, was significantly associated with better speech recognition scores. Speech recognition was not significantly dependent on CDLD latencies. The AUCD and the slope of the AUCD growth function provided a similar explanation of the variance in speech perception (R2) as the eCAP amplitude, the slope of the amplitude growth function, and the amplitude and variance of the first CDLD component. Conclusion: The results demonstrate that both the number and the neural synchrony of excited ANFs, as revealed by CDLDs, are indicative of postimplantation speech perception in individuals with a CI. Because the CDLD-based parameters yielded higher significance than the eCAP amplitude or the AGF slope, the authors conclude that CDLDs can serve as a clinical predictor of the survival of ANFs and that they have predictive value for postoperative speech perception performance. Thus, it would be worthwhile to incorporate the CDLD into eCAP measures in future clinical applications.
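As a schematic illustration of the quantities described in this abstract, the sketch below models a CDLD as the sum of an early and a late Gaussian component and computes the area under the curve (AUCD) and the slope of an AUCD growth function. All names and numbers are hypothetical; the study itself derived the CDLD by deconvolving recorded eCAPs with a unitary response, which is not reproduced here.

```python
import numpy as np

def cdld_component(t, amp, latency, sigma):
    """One Gaussian CDLD component: amplitude, mean firing latency, and
    latency spread (sigma; its square is the variance, an index of how
    synchronously the excited ANFs fire)."""
    return amp * np.exp(-0.5 * ((t - latency) / sigma) ** 2)

def cdld(t, early, late):
    """CDLD as the sum of an early and a late component
    (each a tuple of amp, latency, sigma)."""
    return cdld_component(t, *early) + cdld_component(t, *late)

def aucd(t, y):
    """Area under the CDLD curve (trapezoidal rule): a proxy for the
    total number of excited ANFs over time."""
    dt = np.diff(t)
    return float(np.sum(dt * (y[:-1] + y[1:]) / 2.0))

def aucd_growth_slope(levels, aucds):
    """Slope of a straight-line fit of AUCD against stimulus level."""
    return float(np.polyfit(levels, aucds, 1)[0])

t_ms = np.linspace(0.0, 2.0, 4001)  # latency axis in ms
y = cdld(t_ms, early=(1.0, 0.35, 0.08), late=(0.4, 0.70, 0.15))
print(aucd(t_ms, y))
```

A narrower early component (smaller sigma) yields a taller, more peaked CDLD for the same area, which is the geometric counterpart of the synchrony finding reported above.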
Background: The refractory recovery function (RRF) measures the electrically evoked compound action potential (eCAP) in response to a second pulse (probe) after masking by a first pulse (masker). The RRF is usually used to assess the refractory properties of the electrically stimulated auditory nerve (AN) by recording the eCAP amplitude as a function of the masker-probe interval. Instead of assessing eCAP amplitudes only, recorded waveforms can also be described as a combination of a short-latency component (S-eCAP) and a long-latency component (L-eCAP). It has been suggested that these two components originate from two different AN fiber populations with differing refractory properties. The main objective of this study was to explore whether the refractory characteristics revealed by the S-eCAP, the L-eCAP, and the raw eCAP (R-eCAP) differ from each other. For clinical relevance, we compared these refractory properties between children and adults and examined whether they are related to cochlear implant (CI) outcomes. Design: In this retrospective study, the raw RRF (R-RRF) was obtained from 121 HiFocus Mid-Scala or HiFocus 1J cochlear implant (Advanced Bionics, Valencia, CA) recipients. Each R-eCAP of the R-RRF was split into an S-eCAP and an L-eCAP using deconvolution to produce two new RRFs: the S-RRF and the L-RRF. The refractory properties were characterized by fitting an exponential decay function with three parameters: the absolute refractory period (T); the saturation level (A); and the speed of recovery from nerve refractoriness (Tau), i.e., a measure of the relative refractory period.
We compared the parameters of the R-RRF (R T, R A, R Tau) with those obtained from the S-RRF (S T, S A, S Tau) and the L-RRF (L T, L A, L Tau) and investigated whether these parameters differed between children and adults. In addition, we examined the associations between these parameters and speech perception in adults with CIs. Linear mixed modeling was used for the analyses. Results: We found that R T was significantly longer than S T and L T, and S T was significantly longer than L T. R A was significantly larger than S A and L A, and S A was significantly larger than L A. Also, S Tau was significantly longer than R Tau and L Tau, but no significant difference was found between R Tau and L Tau. Children presented a significantly larger S A and L A and a shorter R T than adults. A shorter S Tau was significantly associated with better speech perception in adult CI recipients, but the other parameters were not. Conclusion: We demonstrated that the two components of the eCAP have different refractory properties and that these also differ from those of the R-eCAP. Compared with the R-eCAP, the refractory properties derived from the S-eCAP and L-eCAP can reveal additional clinical implications in terms of the refractory differences between children and adults as well as speech performance after implantation. Thus, it is worthwhile to consider the two components of the eCAP in the future when assessing the clinical value of the auditory refractory properties.
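The exponential recovery model described above can be written, for a masker-probe interval MPI, as A * (1 - exp(-(MPI - T) / Tau)) once MPI exceeds the absolute refractory period T. The sketch below simply evaluates this model; the parameter values are invented for illustration, and in practice T, A, and Tau would be estimated by fitting the model to the measured RRF (e.g., with a nonlinear least-squares routine).

```python
import numpy as np

def refractory_recovery(mpi_ms, A, T, tau):
    """eCAP amplitude predicted by the exponential recovery model.
    A   : saturation level (amplitude at long masker-probe intervals)
    T   : absolute refractory period (no response before this interval)
    tau : time constant of recovery (the relative refractory period)."""
    mpi_ms = np.asarray(mpi_ms, dtype=float)
    return np.where(mpi_ms > T, A * (1.0 - np.exp(-(mpi_ms - T) / tau)), 0.0)

# Hypothetical recovery curve: A = 100 uV, T = 0.4 ms, tau = 0.5 ms
intervals = np.array([0.2, 0.4, 0.6, 0.9, 1.4, 2.4, 5.0])
print(refractory_recovery(intervals, A=100.0, T=0.4, tau=0.5))
```

In this parameterization, a longer Tau means slower recovery; the study's finding that a shorter S Tau accompanied better speech perception corresponds to faster recovery of the short-latency fiber population.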
This thesis investigates several aspects of current rehabilitation for hearing loss. The chapters address the selection criteria for adult CI candidates (Chapters 2 and 3), language development in children with an auditory brainstem implant (ABI) (Chapter 4), and various developmental outcomes after rehabilitation in children with hearing loss, such as social-emotional functioning (Chapter 5) and educational attainment (Chapter 6).
The treatment of severe to profound sensorineural hearing loss has rapidly evolved in the last several decades. The cochlear implant (CI) device, which forms an interface between a sound signal and the auditory nerve fibers (ANFs) of the deaf ear, is by now an accepted approach of rehabilitation for profoundly deaf individuals and generally achieves high performance in terms of speech perception. However, effectiveness still varies widely from person to person. Therefore, there is a continued impetus for further progress in CIs. In this thesis, we developed new applications of objective measures in modern CIs regarding electrically evoked compound action potential (eCAP) recording and electrical field imaging (EFI). With the development of an iterative deconvolution model, this thesis focuses on extracting the temporal firing properties of excited ANFs in the human eCAP and evaluating their potential implications for clinical practice. In addition, this thesis describes an attempt to intra-operatively assess the placement of the electrode array within the cochlea based on impedance measurements.
Objectives: The primary objective of this study is to identify the biographic, audiologic, and electrode position factors that influence speech perception performance in adult cochlear implant (CI) recipients implanted with a device from a single manufacturer. The secondary objective is to investigate the independent association of the type of electrode (precurved or straight) with speech perception. Design: In a cross-sectional study design, speech perception measures and ultrahigh-resolution computed tomography scans were performed in 129 experienced CI recipients with a postlingual onset of hearing loss. Data were collected between December 2016 and January 2018 in the Radboud University Medical Center, Nijmegen, the Netherlands. The participants received either a precurved electrode (N = 85) or a straight electrode (N = 44), all from the same manufacturer. The biographic variables evaluated were age at implantation, level of education, and years of hearing loss. The audiometric factors explored were preoperative and postoperative pure-tone average residual hearing and preoperative speech perception score. The electrode position factors analyzed, as measured from images obtained with the ultrahigh-resolution computed tomography scan, were the scalar location, angular insertion depth of the basal and apical electrode contacts, and the wrapping factor (i.e., electrode-to-modiolus distance), as well as the type of electrode used. These 11 variables were tested for their effect on three speech perception outcomes: consonant-vowel-consonant words in quiet tests at 50 dB SPL (CVC50) and 65 dB SPL (CVC65), and the digits-in-noise test. Results: A lower age at implantation was correlated with a higher CVC50 phoneme score in the straight electrode group.
Other biographic variables did not correlate with speech perception. Furthermore, participants implanted with a precurved electrode who had poor preoperative hearing thresholds performed better on all speech perception outcomes than participants implanted with a straight electrode who had relatively better preoperative hearing thresholds. After correcting for biographic factors, audiometric variables, and scalar location, we showed that the precurved electrode led to an 11.8-percentage-point higher CVC50 phoneme score (95% confidence interval: 1.4 to 20.4; p = 0.03) compared with the straight electrode. Furthermore, contrary to our initial expectations, the preservation of residual hearing with the straight electrode was poor, as the median preoperative and postoperative residual hearing thresholds for the straight electrode were 88 and 122 dB, respectively. Conclusions: Cochlear implantation with a precurved electrode results in a significantly higher speech perception outcome, independent of biographic factors, audiometric factors, and scalar location.
Objective: By discussing the design, findings, strengths, and weaknesses of the available studies investigating the influence of angular insertion depth on speech perception, we intend to summarize the current status of the evidence and, using evidence-based conclusions, possibly contribute to determining the optimal cochlear implant (CI) electrode position. Data Sources: Our search strategy yielded 10,877 papers. PubMed, Ovid EMBASE, Web of Science, and the Cochrane Library were searched up to June 1, 2018. Both keywords and free-text terms related to the patient population, predictive factor, and outcome measurements were used. There were no restrictions on language or year of publication. Study Selection: Seven articles were included in this systematic review. Articles eligible for inclusion: (a) investigated cochlear implantation of any CI system in adults with post-lingual onset of deafness and normal cochlear anatomy; (b) investigated the relationship between angular insertion depth and speech perception; (c) measured angular insertion depth on imaging; and (d) measured speech perception at, or beyond, 1 year post-activation. Data Extraction and Synthesis: In the included studies, quality was judged low to moderate, and the risk of bias, evaluated using the Quality in Prognostic Studies (QUIPS) tool, was high. The included studies were too heterogeneous to perform meta-analyses; therefore, effect estimates of the individual studies are presented. Six of the seven included studies found no effect of angular insertion depth on speech perception. Conclusion: All included studies are characterized by methodological flaws, and therefore, evidence-based conclusions regarding the influence of angular insertion depth cannot be drawn to date.
The ability to learn rules is at the heart of the ability to learn language. This thesis is a collection of papers tackling rule learning from various perspectives and domains – including the visual, auditory, and speech domains – in both infants and adults. Using both simple XYX-, XXY-, or XYY-type rules and more complex Lindenmayer grammars, we were able to gain insights into the rule learning processes of young infants and of adults. While we were unsuccessful in attempted replications and extensions of previous studies, it was precisely these failures that helped to provide a more nuanced picture of rule learning: even the simplest type of rule learning is far from straightforward. For infants, we find evidence for a repetition bias in both the visual and speech domains that is difficult to overcome, while for adults we show that the learning environment – the task used, the instructions, and the types of testing stimuli – is highly influential in determining whether a simple rule can be learned or not. Furthermore, by studying patterns found in babbling, we were able to hypothesize for the first time about the parallels between production and perceptual abilities with respect to rule learning.
Speech sound categorization in birds seems in many ways comparable to that by humans, but it is unclear what mechanisms underlie such categorization. To examine this, we trained zebra finches and humans to discriminate two pairs of edited speech sounds that varied either along one dimension (vowel or speaker sex) or along two dimensions (vowel and speaker sex). Sounds could be memorized individually or categorized based on one dimension or by integrating or combining both dimensions. Once training was completed, we tested generalization to new speech sounds that were either more extreme, more ambiguous (i.e., close to the category boundary), or within-category intermediate between the trained sounds. Both humans and zebra finches learned the one-dimensional stimulus-response mappings faster than the two-dimensional mappings. Humans performed higher on the trained, extreme and within-category intermediate test-sounds than on the ambiguous ones. Some individual birds also did so, but most performed higher on the trained exemplars than on the extreme, within-category intermediate and ambiguous test-sounds. These results suggest that humans rely on rule learning to form categories and show poor performance when they cannot apply a rule. Birds rely mostly on exemplar-based memory with weak evidence for rule learning.
Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli, wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens.
Beek, F.B. van der; Briaire, J.J.; Marel, K.S. van der; Verbist, B.M.; Frijns, J.H.M. 2016
Tonal bilinguals of two closely related Chinese dialects handle two tonal systems in their mind; their two vocabularies are from closely related dialects; and they write translation equivalents with common Chinese characters. Their unique language situation makes their mind special. This thesis investigates these tonal bilinguals' lexical processing mechanism, studying how they produce and understand words. Their situation provides a valuable test case for several important theories on bilingual lexical access. Bilingual lexical processing is flexible, influenced by the task and the language mode. Moreover, compared with tonal monolinguals, these tonal bilinguals not only showed the classical advantages in executive control but sometimes even performed faster on lexical tasks. The structure of the bilingual lexicon can cause important differences in bilingual lexical processing and the corresponding functions of executive control.
Spoken communication involves transmission of a message which takes physical form in acoustic waves. Within any given language, acoustic cues pattern in language-specific ways along language-specific acoustic dimensions to create speech sound contrasts. These cues are utilized by listeners to discriminate between possible messages intended by the speaker. It is well documented that individual listeners attend to different acoustic cues in different ways. For example, adult second-language (L2) learners often have trouble distinguishing certain L2 speech contrasts. Yet, the question of how listeners come to utilize certain cues and not others for discrimination is not yet well understood. The relationship between this continuous and inherently noisy acoustic signal and the discrete nature of the underlying messages forms the basis for this thesis. I used electrophysiological (EEG) and behavioural measures to investigate how allophonic tonal variants and sub-phonemic features are processed during Mandarin and Dutch speech production, visual processing of written words, and reading aloud. In addition, using the visual world eyetracking paradigm, I investigated how the degree of variation (statistical noise) in the acoustic signal affects perception of Cantonese segment and tone contrasts.