Justin Kitzes and his team at the University of Pittsburgh spend a lot of time listening to birds.
Kitzes is an assistant professor of spatial macroecology at Pitt and is working on classifying hundreds of birds and their calls from across the country for an artificial intelligence database. For many birdwatchers, learning to recognize the tweet or chirp of a bird could take months, but Kitzes’ automated process would identify these birds for users.
The team seals recorders called AudioMoths in zip-locked bags and straps them to trees. Each recorder is about the size of a deck of cards and picks up every chirp and song.
“Essentially, it means we eavesdrop on animals,” Kitzes said.
In the lab, the recordings are converted into spectrograms, visual representations of audio that plot frequency against time. The team sifts through hundreds of hours of recordings, picking out bird calls and labeling each species.
Once humans have identified enough calls from a species, the team can teach the computer to look out for those calls by recognizing specific patterns in the spectrogram images.
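The conversion step the lab describes can be sketched in a few lines of Python. This is a minimal illustration, not the Kitzes lab's actual code: it synthesizes a rising "chirp" standing in for a bird call (in practice the input would be an AudioMoth recording) and computes the frequency-versus-time spectrogram a classifier would learn from.

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for a bird call: a 2-second tone sweeping
# from 2 kHz up to 6 kHz (a hypothetical example, not real field audio).
sample_rate = 22050  # samples per second
t = np.linspace(0, 2, 2 * sample_rate, endpoint=False)
audio = signal.chirp(t, f0=2000, f1=6000, t1=2)

# Each column of `spec` is the sound energy at one moment in time,
# each row one frequency band -- together they form the image
# that human labelers and the classifier both look at.
freqs, times, spec = signal.spectrogram(audio, fs=sample_rate)
print(spec.shape)  # (frequency bins, time frames)
```

On a real recording, distinctive calls show up as repeated shapes in this image, which is what makes pattern-matching approaches possible.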
“We can set up one of these microphones in a field station and leave it for years and years and years recording every single day,” Kitzes said. “So you actually have the opportunity to see in much more detail how populations are changing over time.”
The team is in the process of modeling their first species, the Eastern Towhee, a common bird at their field station. Its distinctive song sounds like the phrase, "Drink your tea!" said Tessa Rhinehart, a research programmer on the project.
Recently, the project was awarded a Microsoft and National Geographic Society AI for Earth grant. Kitzes said he hopes to release an easy-to-use database of 600 birds by the end of this year, which users could apply to their own recordings.
WESA receives funding from the University of Pittsburgh.