Browsing by Subject "PITCH"


Now showing items 1-10 of 10
  • Murtola, Tiina; Malinen, Jarmo; Geneid, Ahmed; Alku, Paavo (2019)
    A multichannel dataset comprising high-speed videoendoscopy images together with electroglottography and free-field microphone signals was used to investigate phonation onsets in vowel production. The multichannel data enabled simultaneous analysis of the two main aspects of phonation: glottal area, extracted from the high-speed videoendoscopy images, and glottal flow, estimated from the microphone signal using glottal inverse filtering. Pulse-wise parameterization of the glottal area and glottal flow indicates that there is no single dominant way to initiate quasi-stable phonation. The trajectories of fundamental frequency and normalized amplitude quotient, extracted from glottal area and estimated flow, may differ markedly during onsets. The location and steepness of the amplitude envelopes of the two signals were observed to be closely related, and quantitative analysis supported the hypothesis that glottal area and flow do not carry essentially different amplitude information during vowel onsets. Linear models were used to predict the phonation onset times from the characteristics of the subsequent steady phonation. The phonation onset time of glottal area was well predicted by a combination of the fundamental frequency and the normalized amplitude quotient of the glottal flow together with the gender of the speaker. For the phonation onset time of glottal flow, the best linear model used the fundamental frequency and the normalized amplitude quotient of the glottal flow as predictors.
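The normalized amplitude quotient (NAQ) mentioned above is a standard time-domain parameterization of a glottal pulse: peak-to-peak flow amplitude divided by the product of the negative peak of the flow derivative and the period length. A minimal sketch, using a synthetic illustrative pulse rather than any data from this study:

```python
import numpy as np

def normalized_amplitude_quotient(flow, fs, f0):
    """NAQ = f_ac / (d_peak * T): peak-to-peak flow amplitude divided by
    the magnitude of the negative peak of the flow derivative times the
    fundamental period length."""
    t_period = 1.0 / f0
    f_ac = flow.max() - flow.min()             # peak-to-peak flow amplitude
    d_peak = abs(np.diff(flow).min()) * fs     # negative derivative peak, in flow units per second
    return f_ac / (d_peak * t_period)

# Illustrative synthetic glottal pulse: one period, open phase shaped as sin^2
fs, f0 = 16000, 100.0
t = np.arange(int(fs / f0)) / fs
pulse = np.maximum(0.0, np.sin(2 * np.pi * f0 * t)) ** 2
naq = normalized_amplitude_quotient(pulse, fs, f0)
```

In practice the pulse would come from glottal inverse filtering of the microphone signal (for NAQ of flow) or from the segmented videoendoscopy images (for NAQ of area), as the abstract describes.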
  • Sarkamo, Teppo; Tervaniemi, Mari; Soinila, Seppo; Autti, Taina; Silvennoinen, Heli M.; Laine, Matti; Hietanen, Marja; Pihko, Elina (2010)
    Acquired amusia is a common disorder after damage to the middle cerebral artery (MCA) territory. However, its neurocognitive mechanisms, especially the relative contribution of perceptual and cognitive factors, are still unclear. We studied cognitive and auditory processing in the amusic brain by performing neuropsychological testing as well as magnetoencephalography (MEG) measurements of frequency and duration discrimination using magnetic mismatch negativity (MMNm) recordings. Fifty-three patients with a left (n = 24) or right (n = 29) hemisphere MCA stroke (MRI verified) were investigated 1 week, 3 months, and 6 months after the stroke. Amusia was evaluated using the Montreal Battery of Evaluation of Amusia (MBEA). We found that amusia caused by right hemisphere damage (RHD), especially to temporal and frontal areas, was more severe than amusia caused by left hemisphere damage (LHD). Furthermore, the severity of amusia was found to correlate with weaker frequency MMNm responses only in amusic RHD patients. Additionally, within the RHD subgroup, the amusic patients who had damage to the auditory cortex (AC) showed worse recovery on the MBEA as well as weaker MMNm responses throughout the 6-month follow-up than the non-amusic patients or the amusic patients without AC damage. Furthermore, the amusic patients both with and without AC damage performed worse than the non-amusic patients on tests of working memory, attention, and cognitive flexibility. These findings suggest that domain-general cognitive deficits are the primary mechanism underlying amusia without AC damage, whereas amusia with AC damage is associated with both auditory and cognitive deficits.
  • Dawson, Caitlin; Tervaniemi, Mari; Aalto, Daniel (2018)
    Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers.
  • Virtala, Paula; Huotilainen, Minna; Lilja, Esa; Ojala, Juha; Tervaniemi, Mari (2018)
    Guitar distortion used in rock music modifies a chord so that new frequencies appear in its harmonic structure. A distorted dyad (power chord) has a special role in heavy metal music due to its harmonics, which create a major third interval, making it similar to a major chord. We investigated how distortion affects cortical auditory processing of chords in musicians and nonmusicians. Electric guitar chords with or without distortion and with or without the interval of the major third (i.e., triads or dyads) were presented in an oddball design where one of them served as a repeating standard stimulus and others served as occasional deviants. This enabled the recording of event-related potentials (ERPs) of the electroencephalogram (EEG) related to deviance processing (the mismatch negativity MMN and the attention-related P3a component) in an ignore condition. MMN and P3a responses were elicited in most paradigms. Distorted chords in a non-distorted context only elicited early P3a responses. However, the power chord did not demonstrate a special role at the level of the ERPs. Earlier and larger MMN and P3a responses were elicited when distortion was modified compared to when only harmony (triad vs. dyad) was modified between standards and deviants. The MMN responses were largest when distortion and harmony deviated simultaneously. Musicians demonstrated larger P3a responses than nonmusicians. The results suggest mostly independent cortical auditory processing of distortion and harmony in Western individuals, and facilitated chord change processing in musicians compared to nonmusicians. While distortion has been used in heavy rock music for decades, this study is among the first to shed light on its cortical basis.
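The oddball design described above (a repeating standard with occasional deviants) can be sketched as a stimulus sequence generator. The condition names, deviant probability, and the no-consecutive-deviants constraint below are illustrative assumptions, not the study's actual parameters:

```python
import random

def oddball_sequence(standard, deviants, n_trials=400, p_deviant=0.15, seed=1):
    """Generate an oddball stimulus sequence: a repeating standard with
    occasional deviants, never allowing two deviants in a row (a common
    constraint in MMN paradigms so each deviant follows a standard)."""
    rng = random.Random(seed)
    seq, prev_was_deviant = [], True  # force the sequence to open with a standard
    for _ in range(n_trials):
        if not prev_was_deviant and rng.random() < p_deviant:
            seq.append(rng.choice(deviants))
            prev_was_deviant = True
        else:
            seq.append(standard)
            prev_was_deviant = False
    return seq

# Hypothetical chord conditions crossing distortion (clean/dist) and harmony (triad/dyad)
stimuli = oddball_sequence("triad_clean",
                           ["triad_dist", "dyad_clean", "dyad_dist"])
```

ERPs to the deviants are then averaged relative to the standard to obtain the MMN and P3a components discussed in the abstract.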
  • Poikonen, Hanna Liisa; Toiviainen, Petri; Tervaniemi, Mari Anni Irmeli (2016)
    The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied with a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI); the two brain imaging methods complement each other.
  • Wikman, Patrik; Rinne, Teemu (2019)
    A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by either overtly repeating the last vowel of a target pair, overtly producing a given response vowel, or by pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ during discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in wide AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.
  • Linnavalli, Tanja; Ojala, Juha; Haveri, Laura; Putkinen, Vesa; Kostilainen, Kaisamari; Seppänen, Sirke; Tervaniemi, Mari (2020)
    Consonance and dissonance are basic phenomena in the perception of chords that can be discriminated very early in sensory processing. Musical expertise has been shown to facilitate neural processing of various musical stimuli, but it is unclear whether this applies to detecting consonance and dissonance. Our study aimed to determine if sensitivity to increasing levels of dissonance differs between musicians and nonmusicians, using a combination of neural (electroencephalographic mismatch negativity, MMN) and behavioral measurements (conscious discrimination). Furthermore, we wanted to see whether focusing attention on the sounds modulated the neural processing. We used chords composed of either highly consonant or highly dissonant intervals and further manipulated the degree of dissonance to create two levels of dissonant chords. Both groups discriminated dissonant chords from consonant ones neurally and behaviorally. The magnitude of the MMN differed only marginally between the more dissonant and the less dissonant chords. The musicians outperformed the nonmusicians in the behavioral task. As the dissonant chords elicited MMN responses in both groups, sensory dissonance seems to be discriminated at an early sensory level, irrespective of musical expertise, and the facilitating effects of musicianship for this discrimination may arise in later stages of auditory processing, appearing only in the behavioral auditory task.
  • Kostilainen, Kaisamari; Partanen, Eino; Mikkola, Kaija; Wikström, Valtteri; Pakarinen, Satu; Fellman, Vineta; Huotilainen, Minna (2020)
    Objective: Auditory change-detection responses provide information on sound discrimination and memory skills in infants. We examined both the automatic change-detection process and the processing of emotional information content in speech in preterm infants in comparison to full-term infants at term age. Methods: Preterm (n = 21) and full-term (n = 20) infants' event-related potentials (ERPs) were recorded at term age. A challenging multi-feature mismatch negativity (MMN) paradigm with phonetic deviants and rare emotional speech sounds (happy, sad, angry), and a simple one-deviant oddball paradigm with pure tones were used. Results: Positive mismatch responses (MMR) were found to the emotional sounds and some of the phonetic deviants in preterm and full-term infants in the multi-feature MMN paradigm. Additionally, late positive MMRs to the phonetic deviants were elicited in the preterm group. However, no group differences in responses to speech-sound changes were discovered. In the oddball paradigm, preterm infants had positive MMRs to the deviant change in all latency windows. Responses to non-speech sounds were larger in preterm infants in the second latency window, as well as in the first latency window at the left hemisphere electrodes (F3, C3). Conclusions: No significant group-level differences were discovered in the neural processing of speech sounds between preterm and full-term infants at term age. Change-detection of non-speech sounds, however, may be enhanced in preterm infants at term age. Significance: Auditory processing of speech sounds in healthy preterm infants showed similarities to full-term infants at term age. Large individual variations within the groups may reflect some underlying differences that call for further studies.
  • Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti (2017)
    The perceived duration of a sound is affected by its fundamental frequency and intensity: higher sounds are judged to be longer, as are sounds of greater intensity. Since increasing intensity lengthens the perceived duration of the auditory object, and increasing the fundamental frequency increases the sound's perceived loudness (up to ca. 3 kHz), the effect of frequency on perceived duration could potentially be explained by a confound in which the primary cause of the modulation is variation in intensity. Here we describe a series of experiments designed to disentangle the contributions of fundamental frequency, intensity, and duration to perceived loudness and duration. In two forced-choice tasks, participants judged duration and intensity differences between two sounds varying simultaneously in intensity, fundamental frequency, fundamental frequency gliding range, and duration. The results suggest that fundamental frequency and intensity each have an impact on duration judgments, while frequency gliding range did not influence the present results. We also demonstrate that the modulation of perceived duration by sound fundamental frequency cannot be fully explained by the confounding relationship between frequency and intensity.
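Varying stimuli simultaneously along intensity, fundamental frequency, gliding range, and duration amounts to a factorial crossing of parameter levels. A minimal sketch of generating such a condition grid; the specific levels below are illustrative placeholders, not the study's actual values:

```python
from itertools import product

# Illustrative parameter levels (hypothetical, not the study's actual values)
intensities_db = [60, 66]     # sound level (dB SPL)
f0s_hz = [220, 247]           # fundamental frequency (Hz)
glide_ranges_st = [0, 2]      # f0 gliding range (semitones)
durations_ms = [200, 230]     # tone duration (ms)

# Full factorial crossing of the four manipulated dimensions:
# 2 x 2 x 2 x 2 = 16 stimulus conditions
conditions = list(product(intensities_db, f0s_hz, glide_ranges_st, durations_ms))
```

Each forced-choice trial would then pair two such conditions, letting the analysis separate the contribution of each dimension to the duration and intensity judgments.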
  • Jaatinen, Jussi; Pätynen, Jukka; Lokki, Tapio (2021)
    The relationship between perceived pitch and harmonic spectrum in complex tones is ambiguous. In this study, 31 professional orchestra musicians participated in a listening experiment where they adjusted the pitch of successively presented complex low-register tones to unison. Tones ranged from A0 to A2 (27.6–110 Hz) and were derived from acoustic instrument samples at three different dynamic levels. Four orchestra instruments were chosen as sources of the stimuli: double bass, bass tuba, contrabassoon, and contrabass clarinet. In addition, a sawtooth tone with 13 harmonics was included as a synthetic reference stimulus. The deviation of subjects' tuning adjustments from unison tuning was greatest for the lowest tones, but also remained unexpectedly high for higher tones, even though all participants had long experience in accurate tuning. Preceding studies have proposed spectral centroid and Terhardt's virtual pitch theory as useful predictors of the influence of the envelope of a harmonic spectrum on the perceived pitch. However, neither concept was supported by our results. According to the principal component analysis of spectral differences between the presented tone pairs, the contrabass clarinet-type spectrum, where every second harmonic is attenuated, lowered the perceived pitch of a tone compared with tones with the same fundamental frequency but a different spectral envelope. In summary, the pitches of the stimuli were perceived as undefined and highly dependent on the listener, spectrum, and dynamic level. Despite their high professional level, the subjects did not perceive a common, unambiguous pitch in any of the stimuli. The contrabass clarinet-type spectrum lowered the perceived pitch.
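The sawtooth reference stimulus and the spectral-centroid predictor discussed above can be sketched numerically. Harmonic amplitudes follow the ideal sawtooth 1/n law; the attenuation factor applied to every second harmonic for the contrabass-clarinet-like spectrum is an illustrative assumption, not a measured value from the study:

```python
import numpy as np

def sawtooth_partials(f0, n_harmonics=13):
    """Ideal sawtooth spectrum: partials at n*f0 with amplitudes 1/n."""
    n = np.arange(1, n_harmonics + 1)
    return n * f0, 1.0 / n

def spectral_centroid(freqs, amps):
    """Amplitude-weighted mean frequency of the partials (Hz)."""
    return np.sum(freqs * amps) / np.sum(amps)

f0 = 55.0  # A1, within the study's A0-A2 range
freqs, amps = sawtooth_partials(f0)
c_saw = spectral_centroid(freqs, amps)

# Contrabass-clarinet-like spectrum: attenuate every second (even) harmonic
amps_cc = amps.copy()
amps_cc[1::2] *= 0.1  # illustrative attenuation factor
c_cc = spectral_centroid(freqs, amps_cc)
```

Note that attenuating the even harmonics lowers the centroid of this synthetic spectrum; the abstract's point, however, is that spectral centroid did not predict the perceived-pitch shifts observed for the real instrument tones.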