Comparing Acoustic and Digital Piano Actions: Data Analysis and Key Insights

The acoustic piano and its sound production mechanisms have been
extensively studied in the field of acoustics. Similarly, digital piano synthesis has been the focus of numerous signal processing
research studies. However, the role of the piano action in shaping the dynamics and nuances of piano sound has received less
attention, particularly in the context of digital pianos. Digital pianos are well-established commercial instruments that typically use
weighted keys with two or three sensors to measure the average
key velocity, which is the only input to a sampling synthesis
engine. In this study, we investigate whether this simplified measurement method adequately captures the full dynamic behavior of
the original piano action. After a brief review of the state of the art,
we describe an experimental setup designed to measure physical
properties of the keys and hammers of a piano. This setup enables
high-precision readings of acceleration, velocity, and position for
both the key and hammer across various dynamic levels. Through
extensive data analysis, we examine their relationships and identify
the optimal key position for velocity measurement. We also analyze
a digital piano key to determine where the average key velocity is
measured and compare it with our proposed optimal timing. We
find that the instantaneous key velocity just before let-off correlates
most strongly with hammer impact velocity, indicating a target
for improved sensing; however, due to the limitations of discrete velocity sensing, this optimization alone may not suffice to replicate
the nuanced expressiveness of acoustic piano touch. This study
represents the first step in a broader research effort aimed at linking
piano touch, dynamics, and sound production.
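To make the sensing distinction concrete, the sketch below contrasts the two quantities discussed in the abstract: the average velocity a typical two- or three-sensor digital piano derives from contact timings, and an instantaneous velocity estimated just before let-off from a high-rate position trace. All depths, names, and the let-off point are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative geometry only: real contact depths and the let-off point
# vary by instrument and are NOT taken from the study.
SENSOR_1_DEPTH = 0.003   # first contact, ~3 mm into the key dip (assumed)
SENSOR_2_DEPTH = 0.008   # second contact, ~8 mm into the key dip (assumed)
LET_OFF_DEPTH = 0.0095   # approximate key depth at let-off (assumed)

def average_key_velocity(t1, t2):
    """Average velocity between the two contact crossings: the single
    number a typical two-sensor digital piano feeds its sample engine."""
    return (SENSOR_2_DEPTH - SENSOR_1_DEPTH) / (t2 - t1)

def velocity_before_let_off(position, time):
    """Instantaneous key velocity just before let-off, via a central
    difference on a densely sampled, monotonically increasing press."""
    i = int(np.searchsorted(position, LET_OFF_DEPTH))  # first sample past let-off
    return (position[i - 1] - position[i - 3]) / (time[i - 1] - time[i - 3])
```

In this toy model, two keystrokes can share the same average velocity yet differ at let-off, which is exactly the discrepancy the correlation analysis above is designed to expose.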

Spatializing Screen Readers: Extending VoiceOver via Head-Tracked Binaural Synthesis for User Interface Accessibility

Traditional screen-based graphical user interfaces (GUIs) pose significant accessibility challenges for visually impaired users. This
paper demonstrates how existing GUI elements can be translated
into an interactive auditory domain using high-order Ambisonics and inertial sensor-based head tracking, culminating in a realtime binaural rendering over headphones. The proposed system
is designed to spatialize the auditory output from VoiceOver, the
built-in macOS screen reader, aiming to foster clearer mental mapping and enhanced navigability.
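As a rough sketch of this pipeline, the fragment below encodes a mono VoiceOver cue into first-order Ambisonics, counter-rotates the sound field by the tracked head yaw, and decodes to stereo through two virtual cardioids. It is a deliberately reduced stand-in: the actual system uses high-order Ambisonics and HRTF-based binaural rendering, and every name here is an assumption rather than the paper's implementation.

```python
import numpy as np

def encode_foa(mono, azimuth):
    """Place a mono screen-reader cue at the given azimuth (radians,
    counterclockwise from front) as first-order Ambisonics (W, X, Y)."""
    w = mono / np.sqrt(2.0)        # omnidirectional component
    x = mono * np.cos(azimuth)     # front-back component
    y = mono * np.sin(azimuth)     # left-right component
    return np.stack([w, x, y])

def rotate_by_head_yaw(bformat, yaw):
    """Counter-rotate the sound field by the tracked head yaw so UI
    elements stay anchored in the room while the listener turns."""
    w, x, y = bformat
    c, s = np.cos(-yaw), np.sin(-yaw)
    return np.stack([w, c * x - s * y, s * x + c * y])

def decode_stereo(bformat):
    """Crude stereo decode via virtual cardioids at +/-90 degrees; the
    real system would instead convolve with HRTFs for binaural output."""
    w, x, y = bformat
    left = 0.5 * (np.sqrt(2.0) * w + y)
    right = 0.5 * (np.sqrt(2.0) * w - y)
    return np.stack([left, right])

# Example: a cue dead ahead, listener turns 45 degrees to the left;
# the cue should now lean toward the right ear.
cue = np.ones(4)
print(decode_stereo(rotate_by_head_yaw(encode_foa(cue, 0.0), np.pi / 4)))
```

Turning left makes the front-anchored cue louder in the right channel, which is the world-anchoring behavior the head tracking provides.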
A between-groups experiment
was conducted to compare standard VoiceOver with the proposed
spatialized version. Sighted participants (n = 32),
with no visual access to the test interface, completed a list-based
exploration and then attempted to reconstruct the UI solely from
auditory cues. Experimental results indicate that the head-tracked
group achieved a slightly higher accuracy in reconstructing the interface, while user experience assessments showed no significant
differences in self-reported workload or usability. These findings suggest that integrating head-tracked binaural audio into mainstream screen-reader workflows may offer benefits, though future investigations involving blind and low-vision users are needed.
Although the experimental testbed uses a generic
desktop app, our ultimate goal is to tackle the complex visual layouts of music-production software, where a head-tracked audio
approach could benefit visually impaired producers and musicians
navigating plug-in controls.