George Tzanetakis


Canada Research Chair in Computer Analysis of Audio and Music

Tier 2 - 2010-06-01
Renewed: 2016-02-01
University of Victoria
Natural Sciences and Engineering

250-472-5711
gtzan@cs.uvic.ca

Research involves


Performing computer analysis and retrieval of audio and music using signal processing, machine learning and visualization techniques.

Research relevance


This research has the potential to completely transform the way we find and create music as well as improve the ability of computers to understand complex mixtures of sounds.

Making Computers Understand What They Hear


Digital sound has radically transformed how music is produced, transmitted and consumed. For the first time in history, an individual can listen to thousands of hours of music, all stored digitally on a small portable music player or phone. Recording and mixing music used to require specialized and expensive equipment, but it can now be done on an inexpensive laptop. Yet to a computer, this music is nothing but millions of numbers with no meaning.

Dr. George Tzanetakis, Canada Research Chair in Computer Analysis of Audio and Music, focuses his interdisciplinary research on designing algorithms that extract information from audio signals, especially music, and on building tools that create more effective interactions between computers, listeners and musicians. Examples of such information include the tempo of a song, whether the singer is male or female, or the mood it would evoke in a listener.
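
As a simple illustration of the kind of task involved, the short sketch below estimates a song's tempo from its raw audio samples. It assumes the open-source librosa library and a local file named "song.wav"; it is illustrative only and not code from this research program.

```python
# A minimal sketch (assumption: librosa is available and "song.wav" exists).
import librosa

# Load the recording as a one-dimensional array of samples plus its sample rate.
signal, sample_rate = librosa.load("song.wav")

# Track beat events and estimate the overall tempo in beats per minute.
tempo, beat_frames = librosa.beat.beat_track(y=signal, sr=sample_rate)

print("Estimated tempo (BPM):", tempo)
```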

Most listeners, even those without musical training, are able to understand sound and music at a much deeper level than any computer currently can. Tzanetakis's work combines ideas from digital signal processing, machine learning and human-computer interaction. He has applied his findings to a diverse set of domains, including creating new ways to interact with a large archive (20,000 hours of sound) of orca vocalizations, building a robotic percussionist that can improvise with a human performer, and automatically assigning descriptive tags to music recordings.

Such algorithms and tools have the potential to completely transform the way we find and create music as well as improve the ability of computers to understand complex mixtures of sounds.