Browsing by Subject "SPEECH"


Now showing items 1-20 of 41
  • Wilenius, Juha; Lehtinen, Henri; Paetau, Ritva; Salmelin, Riitta; Kirveskari, Erika (2018)
    Objective: The intracarotid amobarbital procedure (IAP) is the current "gold standard" in the preoperative assessment of language lateralization in epilepsy surgery candidates. It is, however, invasive and has several limitations. Here we tested a simple noninvasive language lateralization test performed with magnetoencephalography (MEG). Methods: We recorded auditory MEG responses to pairs of vowels and pure tones in 16 epilepsy surgery candidates who had undergone IAP. For each individual, we selected the pair of planar gradiometer sensors with the strongest N100m response to vowels in each hemisphere and, from the vector sum of signals of this gradiometer pair, calculated the vowel/tone amplitude ratio in the left (L) and right (R) hemisphere and, subsequently, the laterality index: LI = (L-R)/(L+R). In addition to the analysis using a single sensor pair, an alternative analysis was performed using averaged responses over 18 temporal sensor pairs in both hemispheres. Results: The laterality index did not correlate significantly with the lateralization data obtained from the IAP. However, an MEG pattern of stronger responses to vowels than tones in the left hemisphere and stronger responses to tones than vowels in the right hemisphere was associated with left-hemispheric language dominance in the IAP in all six patients who showed this pattern. This corresponds to a specificity of 100% and a sensitivity of 67% for this MEG pattern in predicting left-hemispheric language dominance (p = 0.01, Fisher's exact test). In the analysis using averaged responses over temporal channels, one additional patient who was left-dominant in IAP showed this particular MEG pattern, increasing the sensitivity to 78% (p = 0.003). Significance: This simple MEG paradigm shows promise in feasibly and noninvasively confirming left-hemispheric language dominance in epilepsy surgery candidates. It may reduce the need for the IAP if the results are confirmed in larger patient samples.
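    Since the abstract states the laterality index explicitly, it can be illustrated with a minimal sketch. The helper name and the amplitude-ratio values below are hypothetical, chosen only to show how the formula behaves; the study derived L and R from the vector sums of the selected gradiometer pairs.

    ```python
    def laterality_index(left_ratio: float, right_ratio: float) -> float:
        """Laterality index LI = (L - R) / (L + R), where L and R are the
        vowel/tone N100m amplitude ratios in the left and right hemisphere.
        Hypothetical helper illustrating the formula given in the abstract."""
        return (left_ratio - right_ratio) / (left_ratio + right_ratio)

    # Made-up amplitude ratios: stronger vowel responses on the left
    # give a positive LI (left-lateralized); the reverse gives a negative LI.
    print(laterality_index(left_ratio=1.5, right_ratio=0.8))
    ```

    With equal ratios in both hemispheres, LI is exactly 0, and the index is bounded between -1 and +1 for positive ratios.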
  • Raaska, Hanna; Elovainio, Marko; Sinkkonen, Jari; Stolt, Suvi; Jalonen, Iina; Matomaki, Jaakko; Makipaa, Sanna; Lapinleimu, Helena (2013)
  • Zhdanov, Andrey; Nurminen, Jussi; Baess, Pamela; Hirvenkari, Lotta; Jousmaki, Veikko; Makela, Jyrki P.; Mandel, Anne; Meronen, Lassi; Hari, Riitta; Parkkonen, Lauri (2015)
    Hyperscanning: Most neuroimaging studies of human social cognition have focused on brain activity of single subjects. More recently, "two-person neuroimaging" has been introduced, with simultaneous recordings of brain signals from two subjects involved in social interaction. These simultaneous "hyperscanning" recordings have already been carried out with a spectrum of neuroimaging modalities, such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS). Dual MEG Setup: We have recently developed a setup for simultaneous magnetoencephalographic (MEG) recordings of two subjects who communicate in real time over an audio link between two geographically separated MEG laboratories. Here we present an extended version of the setup, in which we have added a video connection and replaced the telephone-landline-based link with an Internet connection. Our setup enabled transmission of video and audio streams between the sites with a one-way communication latency of about 130 ms. Our software that allows reproducing the setup is publicly available. Validation: We demonstrate that the audiovisual Internet-based link can mediate real-time interaction between two subjects who try to mirror each other's hand movements that they can see via the video link. All nine pairs were able to synchronize their behavior. In addition to the video, we captured the subjects' movements with accelerometers attached to their index fingers; from these signals we determined that the average synchronization accuracy was 215 ms. In one subject pair we demonstrate inter-subject coherence patterns of the MEG signals that peak over the sensorimotor areas contralateral to the hand used in the task.
  • Murtola, Tiina; Malinen, Jarmo; Geneid, Ahmed; Alku, Paavo (2019)
    A multichannel dataset comprising high-speed videoendoscopy images, electroglottography, and free-field microphone signals was used to investigate phonation onsets in vowel production. Use of the multichannel data enabled simultaneous analysis of the two main aspects of phonation: glottal area, extracted from the high-speed videoendoscopy images, and glottal flow, estimated from the microphone signal using glottal inverse filtering. Pulse-wise parameterization of the glottal area and glottal flow indicates that there is no single dominant way to initiate quasi-stable phonation. The trajectories of fundamental frequency and normalized amplitude quotient, extracted from glottal area and estimated flow, may differ markedly during onsets. The location and steepness of the amplitude envelopes of the two signals were observed to be closely related, and quantitative analysis supported the hypothesis that glottal area and flow do not carry essentially different amplitude information during vowel onsets. Linear models were used to predict the phonation onset times from the characteristics of the subsequent steady phonation. The phonation onset time of glottal area was found to have good predictability from a combination of the fundamental frequency and the normalized amplitude quotient of the glottal flow, as well as the gender of the speaker. For the phonation onset time of glottal flow, the best linear model was obtained using the fundamental frequency and the normalized amplitude quotient of the glottal flow as predictors.
  • Kallio, Heini; Suni, Antti; Šimko, Juraj; Vainio, Martti (2020)
    Prosodic characteristics, such as lexical and phrasal stress, are among the most challenging features for second language (L2) speakers to learn. The ability to quantify language learners' proficiency in terms of prosody can be of use to language teachers and improve the assessment of L2 speaking skills. Automatic assessment, however, requires reliable automatic analyses of prosodic features that allow for comparison between the productions of L2 speech and reference samples. In this paper we investigate whether signal-based syllable prominence can be used to predict the prosodic competence of Finnish learners of Swedish. Syllable-level prominence was estimated for 180 L2 and 45 native (L1) utterances by a continuous wavelet transform analysis using combinations of f0, energy, and duration. The L2 utterances were graded by four expert assessors using the revised CEFR scale for prosodic features. Correlations of prominence estimates for L2 utterances with estimates for L1 utterances and with linguistic stress patterns were used as a measure of prosodic proficiency of the L2 speakers. The results show that the level of agreement conceptualized in this way correlates significantly with the assessments of expert raters, providing strong support for the use of wavelet-based prominence estimation techniques in computer-assisted assessment of L2 speaking skills.
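    The agreement measure described above (correlating an L2 utterance's syllable-prominence estimates with an L1 reference) can be sketched as a plain Pearson correlation. The per-syllable values below are made up for illustration; the study's actual prominence estimates come from a continuous wavelet transform analysis of f0, energy, and duration.

    ```python
    import math

    def pearson_r(xs, ys):
        # Plain Pearson correlation, used here as the agreement measure
        # between per-syllable prominence estimates of an L2 utterance
        # and a matched L1 reference (a sketch, not the study's code).
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Made-up prominence values for one utterance pair, syllable by syllable:
    l1 = [0.9, 0.2, 0.7, 0.1]   # native reference
    l2 = [0.8, 0.3, 0.6, 0.2]   # learner production
    print(pearson_r(l1, l2))    # closer to 1.0 = closer agreement with L1
    ```

    A higher correlation indicates that the learner's prominence pattern tracks the native pattern more closely, which is the quantity the study relates to expert CEFR ratings.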
  • Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo (2013)
  • Jansson-Verkasalo, Eira; Ruusuvirta, Timo; Huotilainen, Minna; Alku, Paavo; Kushnerenko, Elena; Suominen, Kalervo; Rytky, Seppo; Luotonen, Mirja; Kaukola, Tuula; Tolonen, Uolevi; Hallman, Mikko (2010)
  • Kuuluvainen, Soila; Leminen, Alina; Kujala, Teija (2016)
    Children's obligatory auditory event-related potentials (ERPs) to speech and nonspeech sounds have been shown to be associated with reading performance in children at risk of or with dyslexia and their controls. However, very little is known of the cognitive processes these responses reflect. To investigate this question, we recorded ERPs to semisynthetic syllables and their acoustically matched nonspeech counterparts in 63 typically developed preschoolers and assessed their verbal skills with an extensive set of neurocognitive tests. P1 and N2 amplitudes were larger for nonspeech than speech stimuli, whereas the opposite was true for N4. Furthermore, left-lateralized P1s were associated with better phonological and prereading skills, and larger P1s to nonspeech than speech stimuli with poorer verbal reasoning performance. Moreover, left-lateralized N2s, and equal-sized N4s to both speech and nonspeech stimuli, were associated with slower naming. In contrast, children with equal-sized N2 amplitudes at left and right scalp locations, and larger N4s for speech than nonspeech stimuli, performed fastest. We discuss the possibility that children's ERPs reflect not only neural encoding of sounds, but also sound quality processing, memory-trace construction, and lexical access. The results also corroborate previous findings that speech and nonspeech sounds are processed by at least partially distinct neural substrates.
  • Dawson, Caitlin; Tervaniemi, Mari; Aalto, Daniel (2018)
    Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers.
  • Virtala, Paula Maarit; Partanen, Eino Juhani (2018)
    Music and musical activities are often a natural part of parenting. As accumulating evidence shows, music can promote auditory and language development in infancy and early childhood. It may even help to support auditory and language skills in infants whose development is compromised by heritable conditions, like the reading deficit dyslexia, or by environmental factors, such as premature birth. For example, infants born to dyslexic parents can have atypical brain responses to speech sounds and subsequent challenges in language development. Children born very preterm, in turn, have an increased likelihood of sensory, cognitive, and motor deficits. To ameliorate these deficits, we have developed early interventions focusing on music. Preliminary results of our ongoing longitudinal studies suggest that music making and parental singing promote infants' early language development and auditory neural processing. Together with previous findings in the field, the present studies highlight the role of active, social music making in supporting auditory and language development in at-risk children and infants. Once completed, the studies will illuminate both risk and protective factors in development and offer a comprehensive model of understanding the promises of music activities in promoting positive developmental outcomes during the first years of life.
  • Parviainen, Tiina; Helenius, Päivi; Salmelin, Riitta (2019)
    Auditory cortex in each hemisphere shows a preference for sounds from the opposite hemifield of auditory space. Besides this contralateral dominance, the auditory cortex shows functional and structural lateralization, presumably influencing the features of subsequent auditory processing. Children have been shown to differ from adults in the hemispheric balance of activation in higher-order auditory-based tasks. We studied, first, whether the contralateral dominance can be detected in 7- to 8-year-old children and, second, whether the response properties of auditory cortex in children differ between hemispheres. Magnetoencephalography (MEG) responses to simple tones revealed adult-like contralateral preference that was, however, extended in time in children. Moreover, we found a stronger emphasis towards mature response properties in the right than the left hemisphere, pointing to faster maturation of the right-hemisphere auditory cortex. The activation strength of the child-typical prolonged response decreased significantly with age, within the narrow age range of the studied child population. Our results demonstrate that although the spatial sensitivity to the opposite hemifield has emerged by 7 years of age, the population-level neurophysiological response shows salient immature features, manifested particularly in the left hemisphere. The observed functional differences between hemispheres may influence higher-level processing stages, for example, in language function.
  • Tuomenoksa, Asta; Pajo, Kati; Klippi, Anu (2016)
    This study applies conversation analysis to compare everyday conversation samples between a person with aphasia (PWA) and a familiar communication partner (CP) before and after intensive language-action therapy (ILAT). Our analysis concentrated on collaborative repair sequences, with the assumption that impairment-focused therapy would translate into a change in the nature of the trouble sources that engender the collaborative repair action typical of aphasic conversation. The most frequent repair initiation technique used by the CP was candidate understandings. The function of candidate understandings changed from addressing specific trouble sources pre-ILAT to concluding longer stretches of the PWA's talk post-ILAT. Alongside these findings, we documented a clinically significant increase in the Western Aphasia Battery's aphasia quotient post-ILAT. Our results suggest that, instead of a mere frequency count of conversational behaviours, examining the type and function of repair actions might provide insight into therapy-related changes in conversation following impairment-focused therapy.
  • Vainio, Lari; Tiainen, Mikko; Tiippana, Kaisa; Vainio, Martti (2019)
    It has recently been shown that when participants are required to pronounce a vowel at the same time as a hand movement, the vocal and manual responses are facilitated when a front vowel is produced with forward-directed hand movements and a back vowel is produced with backward-directed hand movements. This finding suggests a coupling between spatial programming of articulatory tongue movements and hand movements. The present study revealed that the same effect can also be observed in relation to directional leg movements. The study suggests that the effect operates within common directional processes of movement planning, including at least tongue, hand, and leg movements, and that these processes might contribute sound-to-meaning mappings to the semantic concepts of 'forward' and 'backward'.
  • Tiainen, Mikko; Lukavsky, Jiri; Tiippana, Kaisa; Vainio, Martti; Šimko, Juraj; Felisberti, Fatima; Vainio, Lari (2017)
    We have recently shown in Finnish speakers that articulation of certain vowels and consonants has a systematic influence on simultaneous grasp actions as well as on forward and backward hand movements. Here we studied whether these effects generalize to another language, namely Czech. We reasoned that if the results generalized to another language environment, it would suggest that the effects arise through processes other than language-dependent semantic associations. Rather, the effects would be likely to arise through language-independent interactions between processes that plan articulatory gestures and hand movements. Participants were presented with visual stimuli specifying articulations to be uttered (e.g., A or I), and they were required to produce a manual response concurrently with the articulation. In Experiment 1 they responded with a precision or a power grip, whereas in Experiment 2 they responded with a forward or a backward hand movement. The grip congruency effect was fully replicated: the consonant [k] and the vowel [ɑ] were associated with power grip responses, while the consonant [t] and the vowel [i] were associated with precision grip responses. The forward/backward congruency effect was replicated with the vowels [ɑ] and [o], which were associated with backward movement, and with [i], which was associated with forward movement, but not with the consonants [k] and [t]. These findings suggest that the congruency effects mostly reflect interaction between processes that plan articulatory gestures and hand movements, with the exception that the forward/backward congruency effect might only work with vowel articulation.
  • Liu, Xuanyao; Kanduri, Chakravarthi; Oikkonen, Jaana; Karma, Kai; Raijas, Pirre; Ukkola-Vuoti, Liisa; Teo, Yik-Ying; Jarvela, Irma (2016)
    Abilities related to musical aptitude appear to have a long history in human evolution. To elucidate the molecular and evolutionary background of musical aptitude, we compared genome-wide genotyping data (641K SNPs) of 148 Finnish individuals characterized for musical aptitude. We assigned signatures of positive selection in a case-control setting using three selection methods: haploPS, XP-EHH and F-ST. Gene ontology classification revealed that the positive-selection regions contained genes affecting inner-ear development. Additionally, a literature survey showed that several of the identified genes were known to be involved in auditory perception (e.g. GPR98, USH2A), cognition and memory (e.g. GRIN2B, IL1A, IL1B, RAPGEF5), reward mechanisms (RGS9), and song perception and production in songbirds (e.g. FOXP1, RGS9, GPR98, GRIN2B). Interestingly, genes related to inner-ear development and cognition were also detected in a previous genome-wide association study of musical aptitude. However, the candidate genes detected in this study were not reported earlier in studies of musical abilities. Identification of genes related to language development (FOXP1 and VLDLR) supports the popular hypothesis that music and language share a common genetic and evolutionary background. The findings are consistent with the evolutionary conservation of genes related to auditory processes in other species and provide the first empirical evidence for signatures of positive selection for abilities that contribute to musical aptitude.
  • Ylinen, Sari; Junttila, Katja; Laasonen, Marja; Iverson, Paul; Ahonen, Lauri; Kujala, Teija (2019)
    Dyslexia is characterized by poor reading skills, yet often also difficulties in second-language learning. The differences between native- and second-language speech processing and the establishment of new brain representations for spoken second language in dyslexia are not, however, well understood. We used recordings of the mismatch negativity component of event-related potential to determine possible differences between the activation of long-term memory representations for spoken native- and second-language word forms in Finnish-speaking 9-11-year-old children with or without dyslexia, studying English as their second language in school. In addition, we sought to investigate whether the bottleneck of dyslexic readers' second-language learning lies at the level of word representations or smaller units and whether the amplitude of mismatch negativity is correlated with native-language literacy and related skills. We found that the activation of brain representations for familiar second-language words, but not for second-language speech sounds or native-language words, was weaker in children with dyslexia than in typical readers. Source localization revealed that dyslexia was associated with weak activation of the right temporal cortex, which has been previously linked with word-form learning. Importantly, the amplitude of the mismatch negativity for familiar second-language words correlated with native-language literacy and rapid naming scores, suggesting a close link between second-language processing and these skills.
  • Välimaa, Taina; Kunnari, Sari; Laukkanen-Nevala, Paivi; Lonka, Eila; Natl Clinical Res Team (2018)
    Background: Children with unilateral cochlear implants (CIs) may have delayed vocabulary development for an extended period after implantation. Bilateral cochlear implantation is reported to be associated with improved sound localization and enhanced speech perception in noise. This study proposed that bilateral implantation might also promote early vocabulary development. Knowledge regarding vocabulary growth and composition in children with bilateral CIs, and the factors associated with them, may lead to improvements in the content of early speech and language intervention and family counselling. Aims: To analyse the growth of early vocabulary and its composition during the first year after CI activation and to investigate factors associated with vocabulary growth. Methods & Procedures: The participants were 20 children with bilateral CIs (12 boys; eight girls; mean age at CI activation = 12.9 months). Vocabulary size was assessed with the Finnish version of the MacArthur Communicative Development Inventories (CDI) Infant Form and compared with normative data. Vocabulary composition was analysed in relation to vocabulary size. Growth curve modelling was implemented using a linear mixed model to analyse the effects of the following variables on early vocabulary growth: time, gender, maternal education, residual hearing with hearing aids, age at first hearing aid fitting and age at CI activation. Outcomes & Results: Despite clear vocabulary growth over time, children with bilateral CIs lagged behind their age norms in receptive vocabulary during the first 12 months after CI activation. In expressive vocabulary, 35% of the children were able to catch up with their age norms, but 55% of the children lagged behind them. In receptive and expressive vocabularies of 1-20 words, analysis of different semantic categories indicated that social terms constituted the highest proportion. Nouns constituted the highest proportion in vocabularies of 101-400 words. The proportion of verbs remained below 20% and the proportion of function words and adjectives remained below 10% in the vocabularies of 1-400 words. There was a significant main effect of time, gender, maternal education and residual hearing with hearing aids before implantation on early receptive vocabulary growth. Time and residual hearing with hearing aids also had a significant main effect on expressive vocabulary growth. Conclusions & Implications: Vocabulary development of children with bilateral CIs may be delayed. Thus, early vocabulary development needs to be assessed carefully in order to provide children and families with timely and targeted early intervention for vocabulary acquisition.
  • Fox, Barbara A.; Heinemann, Trine (2017)
    In previous interactional studies of formats for utterances doing requests, attention has been given to the initial verb (such as can/could or wonder) and possibly the subject (especially I vs. you). The current study examines the main types of grammatical variation found in what we call the "x component," that is, the segment after the initial verb and subject. We examine two types of requests: those with can you x and those with wonder x, and we find that variations in the x component in these requests are associated with variations in the unfolding development of the request sequences. We thus suggest that the x component is crucial to the interactional work accomplished by the requesting utterance.
  • Harjunpää, Roni; Alaluusua, Suvi; Leikola, Junnu; Heliovaara, Arja (2019)
    Background: Maxillary advancement may affect speech in cleft patients. Aims: To evaluate whether the amount of maxillary advancement in Le Fort I osteotomy affects velopharyngeal function (VPF) in cleft patients. Methods: Ninety-three non-syndromic cleft patients (51 females, 42 males) were evaluated retrospectively. All patients had undergone a Le Fort I or bimaxillary (n = 24) osteotomy at the Helsinki Cleft Palate and Craniofacial Center. Preoperative and postoperative lateral cephalometric radiographs were digitized to measure the amount of maxillary advancement. Pre- and postoperative speech was assessed perceptually and instrumentally by experienced speech therapists. Student's t-test and the Mann-Whitney U-test were used in the statistical analyses. Kappa statistics were calculated to assess reliability. Results: The mean advancement of A point was 4.0 mm horizontally (range: -2.8-11.3) and 3.9 mm vertically (range: 14.2-3.9). Although there was a negative change in VPF, the amount of maxillary horizontal or vertical movement did not significantly influence the VPF. There was no difference between the patients with maxillary and bimaxillary osteotomy. Conclusions: The amount of maxillary advancement does not affect velopharyngeal function in cleft patients.
  • Lemmetyinen, Sanna; Hokkanen, Laura; Klippi, Anu (2020)
    Background: Left hemisphere stroke often causes a severe communication disorder that is usually attributed to aphasia. While aphasia refers to linguistic problems, communication is also accomplished by voluntary articulatory and gestural movements, which may be compromised due to apraxia. Along with aphasia, apraxia is a common disorder in left hemisphere stroke, which in severe cases can limit the use of verbal and nonverbal communication methods. Discussion of apraxia from a communicative perspective is still scarce, although the disorder is regularly experienced among left hemisphere stroke patients with aphasia. Rehabilitation in severe apraxia-aphasia is challenging and recovery is slow. Aims: The purpose of this study is to provide an overview of the research on long-term recovery from apraxia and to discuss the meaning of these findings in observing the recovery of communication abilities in a person with severe apraxia-aphasia. The search was not restricted to any specific type of apraxia, as this review assumes that communication may be influenced by apraxia in its different manifestations. The review is based on a systematic literature search, which includes English-language studies retrieved from the Ovid Medline, PsycINFO, and Scopus databases. Main Contribution: Seven long-term follow-up studies of apraxia were found: one case study of apraxia of speech (AOS), four group studies of ideomotor apraxia (IMA), one case study of IMA (and aphasia), and one group study of limb apraxia. Conclusions: The reviewed group studies of patients with left hemisphere stroke indicate that apraxia is a persistent disorder, but the steepest recovery occurs within the first few months post-stroke. Imitation skills and actions involving real-tool use in activities of daily functions show the best recovery.
Real-tool use also continues to improve for longer, whereas gesturing after verbal command may not show clear signs of recovery in the chronic stage post-stroke. There is some evidence that the pace of recovery from oral apraxia and limb apraxia is comparable, and that recovery from apraxia and aphasia do not correlate. Some of the studies used only imitation to assess changes in gesturing, which cannot be regarded as an ecologically valid measure of gesturing in natural communicative situations or even of gesturing after verbal command. Finally, no follow-up studies were found that discussed apraxia from a communicative perspective. Overall, the field lacks research on long-term follow-up of patients with apraxic-aphasic disorder.