International Conference on Computational and Cognitive Musicology

Poster Presentation - "Unveiling the 'Mysterious' Notes in Chinese Qinqiang Opera Through Computational Methods" (w/Tengyue Zhang and Johanna Devaney)

October 8-10, 2025 @ Aalborg University; Aalborg, Denmark

This paper investigates three microtonally ambiguous “mysterious” notes in traditional Chinese qinqiang opera that have long been debated in the scholarly literature. Drawing on a corpus of 18 audio clips from 9 archival recordings spanning over three decades (1991–2023), we apply the pyAMPACT toolkit to perform note-level audio analysis and estimate precise pitch and onset data. Our findings challenge prior characterizations of these notes as microtonal inflections. Qinqiang opera uses two scales: the florid tone scale and the bitter tone scale. The tuning of scale degrees 4 and 7 in the florid tone scale is contested, as is the tuning of scale degrees 3, 4, and 7 in the bitter tone scale. Through computational analysis, we determined that scale degrees 4 and 7 in the florid tone scale, as well as scale degrees 3 and 4 in the bitter tone scale, show no consistent deviation from equal temperament. Scale degree 7 in the bitter tone scale, however, consistently appears a semitone lower. These results suggest a historical shift from traditional microtonal practice toward standardized diatonic intonation over the past 34 years. The paper offers a revised interpretation of the florid and bitter tone scales and demonstrates how computational methods can support musicological inquiry in non-Western traditions.
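The core measurement behind this kind of analysis — checking whether a note deviates from equal temperament — can be sketched as a cents-deviation calculation. This is a minimal illustration, not part of pyAMPACT's published API; the function name and reference pitch are illustrative assumptions.

```python
import numpy as np

def cents_from_equal_temperament(freqs_hz, ref_hz=440.0):
    """Deviation (in cents) of each frequency from the nearest
    12-tone equal-tempered pitch, relative to A4 = ref_hz.
    Values fall in [-50, +50): 0 means exactly in tune with ET."""
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    # Fractional distance from A4 in equal-tempered semitones
    semitones = 12.0 * np.log2(freqs_hz / ref_hz)
    # Offset from the nearest ET semitone, scaled to cents
    return 100.0 * (semitones - np.round(semitones))
```

For example, 440 Hz yields 0 cents, while a pitch a quarter tone above A4 yields +25 cents; a scale degree that is consistently rendered a semitone lower (as found for degree 7 of the bitter tone scale) would simply snap to the neighboring ET pitch with near-zero deviation.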

ICCCM Website


SingWell Symposium

Presentation - Developing predictive models of the relationship between acoustic descriptors of the voice and disease severity in group singing participants living with Parkinson's disease (w/Johanna Devaney)

November 21, 2025 @ University of Toronto; Toronto, ON, Canada

This presentation will describe the current status of our project on assessing which acoustic descriptors related to vocal production can be used to develop predictive machine learning models of other data collected in SingWell projects.

We will begin by describing our automated workflow for estimating acoustic descriptors from the speech recordings, specifically the extensions we made to the pyAMPACT (Automatic Music Performance Analysis and Comparison Toolkit in Python) package to work with speech data. pyAMPACT uses a score- (or transcript-) informed approach for estimating note-level performance parameters from musical audio. Extending it to speech audio first involved evaluating existing speech-to-text aligner tools to assess whether they would work well for this data. We then developed a pipeline that processes the aligner's output so that pyAMPACT can handle it in the same way it handles score-aligned music audio. The next stage was the implementation of speech-specific descriptors, including the Acoustic Voice Quality Index (AVQI), Cepstral Peak Prominence Smoothed (CPPS), Glottal-to-Noise Excitation ratio (GNE), Harmonics-to-Noise Ratio (HNR), jitter, and shimmer, all of which are calculated on text-aligned speech audio.
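Two of the descriptors named above, jitter and shimmer, have simple standard definitions once glottal periods and peak amplitudes have been extracted. The sketch below shows the "local" variants as commonly defined (e.g. in Praat); it is an illustrative assumption, not the pyAMPACT implementation, and the period/amplitude extraction step is omitted.

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal cycle durations, divided by the mean cycle duration."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

def local_shimmer(amplitudes):
    """Local shimmer (%): mean absolute difference between consecutive
    cycle peak amplitudes, divided by the mean peak amplitude."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a)
```

A perfectly periodic signal gives 0% for both; cycle-to-cycle irregularity in duration raises jitter, and irregularity in amplitude raises shimmer.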

We will then describe our analysis of pre- and post-recordings from 12 participants living with Parkinson’s. The singing data are recordings of the participants singing in isolation, and the speech data are recordings of the Grandfather Passage. After using an alignment approach to segment both the singing and speech audio, we estimated a range of singing- and speech-related descriptors. We are currently analyzing these descriptors to assess which acoustic features correlate with disease severity and Hoehn & Yahr disease stage, and anticipate being able to present these results at the SingWell symposium.
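Because Hoehn & Yahr stage is an ordinal variable, a rank-based correlation is a natural fit for the feature-screening step described above. The following is a minimal Spearman correlation sketch under that assumption, not the project's actual analysis code (which would also need tie handling, e.g. via scipy.stats.spearmanr).

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Simplified version that assumes no tied values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # 0-based ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # 0-based ranks of y
    return float(np.corrcoef(rx, ry)[0, 1])
```

A descriptor that rises monotonically with disease stage yields rho near +1, one that falls monotonically yields rho near -1, and an uninformative descriptor yields rho near 0.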

SingWell Symposium 2025