Research/Technology
The Research/Technology track is designed for students interested in
musically-related research or developing new music technology. Students
in this track study with a departmental advisor, and if applicable to a
specific research topic, with another member of the broader Johns Hopkins
academic community. Research/Technology track students work closely with
practicing composers and performers in developing new computer music
systems. Areas of research may include psychoacoustics, perception,
hardware or software synthesis and control techniques, algorithmic
composition and performance systems, and other related topics.
The Hall-O-Deck (Virtual Concert Hall) project and multiprocessor DSP
strategies
Utilizing hardware provided by the Intel grant, the
Computer Music Department is developing a system that allows a performer
to rehearse in a practice room while the instrument's acoustic signal is
processed by NT workstations so that the acoustics of a
specific concert hall are replicated. The
performer plays into a microphone, digital signal processing is applied to
the signal, and the processed signals are played back into the practice
room (through small multimedia speakers), replicating the acoustics of
the desired hall.
Initial experiments with "off-the-shelf" processing equipment have been
promising. We will, however, need to create models for specific concert
halls and develop our own signal processing algorithms.
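Replicating a hall's acoustics in this way amounts to convolving the dry microphone signal with an impulse response measured in the target hall. A minimal sketch of FFT-based convolution reverb (the impulse response here is a synthetic decaying-noise stand-in, not a measured hall model):

```python
import numpy as np

def convolve_hall(dry, impulse_response):
    """Apply a hall impulse response to a dry signal via
    FFT-based linear convolution."""
    n = len(dry) + len(impulse_response) - 1
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(impulse_response, n), n)
    return wet

sr = 44100
# Synthetic stand-in for a measured hall impulse response:
# an exponentially decaying noise burst (~1.5 s).
t = np.arange(int(1.5 * sr)) / sr
ir = np.random.randn(len(t)) * np.exp(-3.0 * t)

dry = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of A440
wet = convolve_hall(dry, ir)
print(len(wet))  # len(dry) + len(ir) - 1
```

In a real-time system the impulse response would be partitioned and the convolution done block by block, but the offline form above shows the underlying operation.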
Intel Grant Projects
The Computer Music Department recently received a grant from the Intel
Corporation to pursue several research projects, including: the
PARIS/Ensoniq/Intelligent Devices Project; creating a realtime,
interactive, networked composition/performance system; and developing the
Virtual Concert Hall.
See project proposals.
See Peabody News.
See Intel Press Releases.
Other Research
Comparison of pitch detection algorithms
Researchers: Lilit Yoo, Ichiro Fujinaga
Target: ICMC99 (abstract)
(paper PDF)
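One of the classic algorithms such a comparison would include is time-domain autocorrelation; a minimal sketch (the frame size and pitch search range are illustrative assumptions, not parameters from the study):

```python
import numpy as np

def autocorr_pitch(frame, sr, fmin=60.0, fmax=1000.0):
    """Estimate pitch as the lag that maximizes the frame's
    autocorrelation, restricted to a plausible lag range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220.0 * t)  # pure A3 test tone
print(round(autocorr_pitch(frame, sr), 1))  # ~220 Hz
```

The estimate is quantized to integer sample lags, which is one source of error a comparison of algorithms would quantify.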
Transmission of MIDI over UDP/IP for distance education
Researchers: John Young, Ichiro Fujinaga
Target: ICMC99
(abstract)
(paper PDF)
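Raw 3-byte MIDI messages are small enough to wrap one per UDP datagram; a minimal sketch of the idea (the timestamp framing and loopback setup are assumptions for illustration, not the wire format used in the paper):

```python
import socket
import struct
import time

def send_midi(sock, addr, status, data1, data2):
    """Send one 3-byte MIDI message, prefixed with a millisecond
    timestamp so the receiver can estimate transit delay."""
    ts = int(time.time() * 1000) & 0xFFFFFFFF
    sock.sendto(struct.pack("!IBBB", ts, status, data1, data2), addr)

# Receiver bound to an OS-assigned loopback port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_midi(sender, addr, 0x90, 60, 100)  # note-on, middle C, velocity 100

packet, _ = recv.recvfrom(16)
ts, status, note, vel = struct.unpack("!IBBB", packet)
print(hex(status), note, vel)  # 0x90 60 100
```

UDP's lack of retransmission keeps latency low, at the cost of possible dropped or reordered messages, which matters for note-off events.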
Multimodal input capacities
Researchers: Dave Sullivan, Ichiro Fujinaga
Target: ICMC99 (abstract)
Zeta violin latencies and techniques
The Zeta violin converts analog sound to MIDI via multi-channel IVL
PitchRider
hardware. The problem is the delay in calculating the pitch. The purpose
of this study is to measure this latency, which depends on many factors
such as pitch, articulation, and dynamics. New violin techniques will
also be investigated to minimize the latency.
Researchers: Lilit Yoo, Ichiro Fujinaga, Geoffrey Wright
Targets: SEAMUS98 (abstract)
paper read at SEAMUS98, April 18, 1998 (paper PDF)
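Part of why latency depends on pitch: a pitch-to-MIDI converter must observe some number of full waveform periods before the pitch can be determined, so lower notes impose a longer floor on the delay. A small back-of-the-envelope sketch (the two-period figure is an illustrative assumption, not IVL's actual requirement):

```python
def min_latency_ms(freq_hz, periods_needed=2.0):
    """Lower bound on pitch-detection latency: the detector must
    see 'periods_needed' full cycles before the pitch is known."""
    return periods_needed / freq_hz * 1000.0

# Compare the violin's open G string with higher notes:
for name, f in [("G3", 196.0), ("A4", 440.0), ("E6", 1318.5)]:
    print(f"{name}: >= {min_latency_ms(f):.1f} ms")
```

This lower bound ignores the hardware's own processing time, which measurement would capture; it only shows why the low strings are the worst case.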
Real-time software synthesis for psychoacoustic experiments
An introduction to using real-time software synthesis systems
(SuperCollider, MPS, and Pd) as valuable tools for psychoacoustic
experiments. Small example experiments will be conducted.
Researchers: Stephan Moore, David Sullivan, Ichiro Fujinaga
Targets: ICMPC98 (abstract)
(notes)
(paper HTML)
(paper PDF)
Latency of audio and MIDI data over LANs
Measuring latencies of audio and MIDI data over a typical music school
LAN using a 10 Mb switch, a 100 Mb hub, and a 100 Mb switch.
Researchers: Tony Willert, Ichiro Fujinaga
Target: ICMC
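A common way to measure such latencies is a timestamped UDP echo: send a message-sized datagram, have the far end echo it back, and halve the round-trip time. A minimal sketch (run here over loopback; on the real LAN the echo server would sit on the far side of the switch or hub under test):

```python
import socket
import threading
import time

# Echo socket bound to an OS-assigned loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

def echo():
    """Echo each datagram back to its sender (stand-in for the
    far end of the LAN link)."""
    for _ in range(10):
        data, peer = server.recvfrom(1024)
        server.sendto(data, peer)

threading.Thread(target=echo, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts = []
for _ in range(10):
    t0 = time.perf_counter()
    client.sendto(b"x" * 3, addr)  # MIDI-message-sized payload
    client.recvfrom(1024)
    rtts.append((time.perf_counter() - t0) * 1000.0)

print(f"median RTT: {sorted(rtts)[len(rtts) // 2]:.3f} ms")
```

Reporting the median rather than the mean guards against occasional scheduling spikes; one-way latency is roughly half the round trip if the path is symmetric.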
Violin vibrato technique and its implication for pitch perception
Do violin players play slightly flat when playing vibrato?
Researchers: Lilit Yoo, Ichiro Fujinaga
Target: ICMC98
(abstract)
The effect of vibrato on response time in determining the pitch
relationship
of violin tones
Can vibrato hide bad intonation?
Researchers: Lilit Yoo, Stephan Moore, David Sullivan, Ichiro Fujinaga
Target: ICMPC98
(abstract)
(paper PDF)
Implementation of exemplar-based learning model for music cognition
The exemplar-based learning model is proposed as an alternative
approach
to modeling many aspects of music cognition.
Researcher: Ichiro Fujinaga
Target: ICMPC98
(abstract)
(paper PDF)
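The core of an exemplar-based (lazy) model is simple: store all training examples as-is and classify a new stimulus by the majority vote of its k nearest stored exemplars. A minimal sketch (the two-dimensional features and instrument labels are hypothetical, for illustration only):

```python
import numpy as np

def knn_classify(exemplars, labels, query, k=3):
    """Exemplar-based classification: majority vote among the
    k nearest stored exemplars (Euclidean distance)."""
    d = np.linalg.norm(exemplars - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D feature vectors (e.g. two spectral measures).
exemplars = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = ["flute", "flute", "oboe", "oboe"]
print(knn_classify(exemplars, labels, np.array([0.85, 0.85])))  # oboe
```

There is no training phase; all the work happens at query time, which is what distinguishes lazy learning from models that compress the data into rules or weights.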
Converting Levy Sheet Music Collection to MIDI
Using an optical music recognition system, the collection of sheet
music is converted to computer-readable formats, including MIDI, which
can then be heard via web clients.
Researcher: Ichiro Fujinaga
NSF grant (1999-2001)
abstract, part
of the Digital Libraries Initiative - Phase 2
Timbral recognition using lazy learning
How do we recognize timbre? How well can we recognize it using just the
steady-state portion of a musical tone? Can machines do this in
real time?
Researcher: Ichiro Fujinaga
Machine recognition of timbre using steady-state tone of acoustic
musical instruments
Target: ICMC98 (abstract)
(paper PDF)
Toward realtime recognition of acoustic musical instruments
Target: ICMC99
(abstract)
(paper PDF)