Life of the Mind
Poster Presentation - Signal Crate: A Deterministic Modular Framework for Real-time Sound Synthesis, Audio Processing, and Algorithmic Control
November 12-13, 2025 @ SUNY Oneonta; Oneonta, NY
Signal Crate is a lightweight, easily expandable terminal application, written entirely in C, for live audio processing and performance. It blends the worlds of control voltage and
scripted event programming with modular synthesizer design. All modules are controllable from the
computer keyboard, by one another, or via OSC.
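Any client that speaks OSC can drive the modules. As a minimal sketch (the parameter address below is hypothetical, not a documented Signal Crate path), a single-float OSC message can be packed by hand following the OSC 1.0 encoding and sent over UDP:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad an OSC string to a multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message with one float32 argument (type tag ',f')."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")
            + struct.pack(">f", value))  # OSC arguments are big-endian

# Set a hypothetical module parameter; /mod/1/freq is illustrative only.
packet = osc_message("/mod/1/freq", 440.0)
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 9000))
```

The same packing works for any single-parameter update; multi-argument messages extend the type-tag string and append further big-endian values.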
Life of the Mind Info
SingWell Symposium
Presentation - Developing predictive models of the relationship between acoustic descriptors of the voice and disease severity in group singing participants living with Parkinson's disease (w/Johanna Devaney)
November 21, 2025 @ University of Toronto; Toronto, ON, Canada
This presentation will describe the current status of our project on assessing which acoustic descriptors related to vocal production can be used to develop predictive
machine learning models of other data collected in SingWell projects.
We will begin by describing our automated workflow for estimating acoustic descriptors from the speech
recordings, specifically the extensions we made to the pyAMPACT (Automatic Music Performance Analysis and Comparison Toolkit in python) package to work on speech data.
pyAMPACT uses a score- (or transcript-) informed approach for estimating note-level performance parameters from musical audio. The extension to work with speech audio first
included evaluating existing speech-to-text aligner tools to assess whether they would work well for this data. We then developed a pipeline to process the output of the aligner
to work with pyAMPACT in the same way that score-aligned music audio is processed. The next stage involved the implementation of speech-specific descriptors, including the Acoustic Voice
Quality Index (AVQI), Cepstral Peak Prominence Smoothed (CPPS), Glottal-to-Noise Excitation ratio (GNE), Harmonic-to-Noise Ratio (HNR), jitter, and shimmer, all of which are
calculated on text-aligned speech audio.
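The period-based descriptors in that list have compact definitions. As a minimal numpy sketch of local jitter and shimmer, the standard cycle-to-cycle formulations (pyAMPACT's actual implementations may differ in windowing and voicing decisions):

```python
import numpy as np

def local_jitter(periods) -> float:
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, relative to the mean period."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

def local_shimmer(amplitudes) -> float:
    """Local shimmer (%): the same measure applied to cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a)

# Toy input: periods around 5 ms (~200 Hz) with slight cycle-to-cycle variation.
print(local_jitter([0.0050, 0.0051, 0.0049, 0.0050, 0.0052]))
```

In practice the period and amplitude sequences come from pitch-synchronous analysis of the text-aligned segments rather than from hand-entered values.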
We will then describe our analysis of pre- and post-recordings from 12 participants living with Parkinson's. The singing data are recordings of the participants singing in isolation, and the speech
data are recordings of the Grandfather Passage. After using an alignment approach to segment both the singing and speech audio, we estimated a range of singing- and speech-related descriptors.
We are currently analyzing these descriptors to assess which acoustic features correlate with disease severity and Hoehn & Yahr disease stage, and we anticipate being able to
present these results at the SingWell Symposium.
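A correlation check of this kind can be sketched as follows. The numbers below are synthetic placeholders, not project data, and CPPS stands in for whichever descriptors prove informative; Spearman's rank correlation is a natural fit for the ordinal Hoehn & Yahr scale:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic per-participant values for illustration only.
cpps_db = np.array([14.2, 13.1, 12.8, 11.9, 11.5, 10.7])  # hypothetical CPPS (dB)
hy_stage = np.array([1, 1, 2, 2, 3, 3])                   # hypothetical H&Y stages

# spearmanr rank-transforms both variables, handling ties by average ranks.
rho, pval = spearmanr(cpps_db, hy_stage)
print(rho, pval)
```

With real data this would be repeated per descriptor, with the small sample size kept in mind when interpreting p-values.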
SingWell Symposium 2025