Bioacoustic Studies
Bioacoustics
Wikipedia states that bioacoustics is a cross-disciplinary science that combines biology and acoustics. McLoughlin1 says bioacoustics is the study of the production, transmission and reception of animal sounds. A few years ago I invested in a moderately priced recording device that recorded the calls of bats and used an internal program to identify the bat species. Today that device costs about two-thirds of what I paid, and the program is much better. I found it extremely interesting that I could know what bats were in the vicinity of my house or flying around my favorite lake. I was able to identify five species of bats, one of them moderately rare, and I never saw them!
This is the beauty of this new science: you can “hear” a target species even though it is not in sight and may be a mile away. Audio recorders can cover much more area than a camera. Bioacoustic methods are becoming increasingly automated. Researchers are using remote recorders that automatically collect data. These recorders are called passive acoustic monitors (PAMs) or autonomous recording units (ARUs).2 “If it chirps, squawks, crows, howls, whistles, peeps, croaks, gurgles or belches, then the PAM can capture the sound, which is stored on a memory card and then analyzed using image recognition software.”2 An example of this type of technology is Merlin® (www.MerlinBirdID.com), a phone app developed by the Cornell Lab of Ornithology (www.birds.cornell.edu/home/).
This opens up far more extensive avenues of research. PAMs can be used like trail cameras. Analysis of animal sounds can be used for individual detection, species detection, location detection and population monitoring. Bioacoustics has been used to track whales, bats and frogs, but the practice has turned out to be particularly effective for studying birds, because their songs are so clear and consistent that recordings can be mined for powerful scientific data. For example, by using recorders in urban environments, scientists have learned that urban birds are singing louder3 and at a higher pitch4 than their rural counterparts. Connor Wood led a study of the endangered spotted owl across California’s 38,000-square-mile Sierra Nevada range. Given the size of the landscape, he was unsure how to approach the study until he heard about bioacoustics. “Over two years, he and his crew set up a network of hundreds of recording devices the size of lunchboxes, moving them gradually across the range to capture ‘soundscapes’ of the forest. They indeed picked up the hoots of the owls as they searched the data.”5 PAMs have many uses and applications, but one of the primary goals for ecologists is being able to collect and analyze data at large spatial scales to monitor the status, trends, distribution and habitat use of wildlife species, all of which are important targets of management or indicators of successful management.2
In marine mammal science, the most common method of determining an animal’s location is known as passive acoustic sonar. Researchers place an array of evenly spaced microphones that record the sound of an individual, then calculate the differences in the vocalization’s time of arrival across the microphones and triangulate the location.1
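To make the idea concrete, here is a minimal sketch (in Python, with made-up signals, delay and sampling rate) of how the time difference of arrival between two synchronized recordings can be estimated by cross-correlation; real localization systems are far more sophisticated and combine several receivers at known positions.

```python
import numpy as np
from scipy.signal import correlate

def estimate_tdoa(sig_a, sig_b, sample_rate):
    """Estimate how much later (in seconds) the same sound arrives in
    sig_a than in sig_b, using the peak of their cross-correlation."""
    corr = correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    return lag_samples / sample_rate

# Made-up example: a decaying tone that reaches mic B about 5 ms after mic A.
fs = 48_000                                 # sampling rate in Hz (assumed)
t = np.arange(0, 0.25, 1 / fs)
call = np.sin(2 * np.pi * 2000 * t) * np.exp(-40 * t)
delay = int(0.005 * fs)                     # 5 ms expressed in samples
mic_a = np.concatenate([call, np.zeros(delay)])
mic_b = np.concatenate([np.zeros(delay), call])

print(f"mic B lags mic A by ~{estimate_tdoa(mic_b, mic_a, fs) * 1000:.1f} ms")
# With delays measured between three or more microphones at known positions,
# the source location can then be triangulated.
```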
Wildlife Acoustics (www.wildlifeacoustics.com) produces several types of monitors, including some that record the higher frequencies needed for detecting bats. It also produces a device (Echo Meter Touch 2) that turns a smartphone into an interactive bat detector. SWIFT (www.birds.cornell.edu/ccb/swift/) is a terrestrial passive acoustic recording unit produced by the Cornell Lab of Ornithology. AudioMoth (www.openacousticdevices.info) is a unit developed by Open Acoustic Devices (only $70); it is a full-spectrum acoustic logger that can detect a wide range of frequencies. Frontier Labs (frontierlabs.com.au) produces advanced bioacoustic audio recorders (BAR) with a built-in GPS.
PAMs are great at collecting a lot of data, but this raises a problem: how do you sort, process and identify all of the species in the data? The audio files need to be converted to spectrograms. Fortunately, there are commercial software programs available to help sort and identify calls. These packages go beyond playback and viewing of spectrograms by providing methods for detection, measurement and other analyses. Two packages, Kaleidoscope (Wildlife Acoustics) and Raven (Cornell Lab of Ornithology), can be helpful.
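For readers curious what the spectrogram conversion step involves, below is a minimal sketch using freely available Python tools; the file name is a placeholder, and this is not how Kaleidoscope or Raven work internally.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Placeholder file name; substitute any WAV recording from a PAM/ARU.
sample_rate, audio = wavfile.read("recording.wav")
if audio.ndim > 1:                  # keep only one channel if the file is stereo
    audio = audio[:, 0]

# Short-time Fourier transform: how the frequency content changes over time.
freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=1024)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram")
plt.savefig("spectrogram.png", dpi=150)
```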
Populations of northern bobwhite quail have been declining across their range, with declines of 68-75% in Oklahoma and Texas over the past five decades. From 2008 to 2018, the Noble Research Institute2 “conducted spring whistle counts for bobwhite to look at trends in populations as they relate to environmental and habitat conditions. Starting in 2019, PAMs replaced traditional human surveys, allowing researchers to monitor 29 sites simultaneously across two study sites. PAMs collected data for three days during four separate sessions (12 days total) during the spring of 2019 and 2020, which coincides with the calling activity of many species of birds. The PAMs now offer a permanent record of all recorded species that are important for understanding biodiversity, changes in populations and habitat use. Current research is developing acoustic matching templates for other species of conservation concern, or that are considered game, indicator or umbrella species. For example, dickcissels and eastern meadowlarks also have been experiencing long-term declines in their populations, so managers may want to keep a close eye on whether these species are present and in what numbers if present.”2
Similar to the example above, Dr. Elena West received a grant from the Legislative-Citizen Commission on Minnesota Resources (LCCMR) entitled “Bioacoustics for broad-scale species monitoring and conservation” to study red-headed woodpeckers (RHWOs). The objectives of this grant are to:
1. Identify the current breeding distribution of red-headed woodpeckers in Minnesota and collect information on occupancy, reproduction, and breeding habitats.
2. Develop a monitoring protocol to rigorously detect red-headed woodpecker population trends and responses to habitat management.
Jerry Bahls
1. McLoughlin MP, Stewart R, McElligott AG. 2019. Automated bioacoustics: methods in ecology and conservation and their potential for animal welfare monitoring. J. R. Soc. Interface 16: 20190225. http://dx.doi.org/10.1098/rsif.2019.0225
2. Proctor M (senior research associate), Webb S (staff scientist). December 2020. Noble Research Institute, Vol. 38, Issue 12.
3. Brumm H. 2004. The impact of environmental noise on song amplitude in a territorial bird. Journal of Animal Ecology 73(3): 434–440. doi:10.1111/j.0021-8790.2004.00814.x
4. Slabbekoorn H, Peet M. 2003. Birds sing at a higher pitch in urban noise. Nature 424(6946): 267. doi:10.1038/424267a
5. Gyllenhaal A. January 10, 2020. With bioacoustics, conservationists try to save birds through their songs. The Washington Post. https://www.washingtonpost.com/science/with-bioacoustics-conservationists-try-to-save-birds-through-their-songs/2020/01/10/8b800048-0c9a-11ea-bd9d-c628fd48b3a0_story.html
Bioacoustics and Red-headed Woodpecker Conservation
by Dr. Elena West
My research on red-headed woodpeckers has taken many exciting turns over the last three years, and I’m excited to share more about a new project that I initiated this past summer [2021], one that builds on this work and expands it to other areas of the state.
My research at the Cedar Creek Ecosystem Science Reserve has largely focused on a deep dive into the ecology and behavior of red-headed woodpeckers. My collaborators and I have collected a wide range of data that are helping to shed light on this species, its habitat, and other species that are a part of Cedar Creek’s oak savanna ecosystem. The data we’ve gathered will also help inform best management practices and recommendations for land managers working on habitat restoration around the state.
In order for conservation and restoration efforts to be successful, however, we must first identify where red-headed woodpeckers are located. This has been challenging for a number of reasons, including the species’ apparent rarity on the landscape, limited survey coverage, and the fact that woodpeckers typically vocalize less frequently than most songbirds. There are currently only two known locations in the state with relatively stable red-headed woodpecker populations (Cedar Creek and the Camp Ripley Minnesota National Guard Training Facility), and beyond that we have very little information on the species’ statewide distribution and where individuals may be successfully breeding. This kind of information is critical for any effort to facilitate species recovery at the statewide scale.
To tackle this challenge, my team and I will be using acoustic recording units (ARUs) to survey for red-headed woodpeckers throughout the state. ARUs are small, relatively inexpensive devices that can be programmed to record sound for long periods, and for many species they match the effectiveness of human surveys. ARUs can detect sounds from animals like birds and frogs, wind and rain, and even human-made sounds like cars and planes. The autonomous sensing approach is also a good alternative to in-person surveys because devices can be left out for long periods of time to gather as much data as needed across a much larger area, including areas that may be difficult to access. Sending out human observers to accomplish the same task at such a scale would require significantly more time, people and funding.

The autonomous sensing approach is also great for surveying rare species and species that vocalize less frequently (like red-headed woodpeckers!) because the devices can generate long, continuous recordings that vastly increase the amount of time researchers can sample for those infrequent vocalizations.
Many acoustic monitoring tools evolved from the work of geophysicists who used long-term monitoring systems to detect low-frequency sounds underwater. Research using sound was historically cumbersome and imprecise, but this changed when companies like Microsoft and Google shared their artificial intelligence work, which allowed for significant improvements in speech recognition. We’re currently in the midst of a revolution in autonomous sensing approaches that now allow us to combine autonomous recorders with computer programs that automatically identify the sounds captured on those devices. These machine learning and artificial intelligence tools can now be applied to wildlife monitoring, ecological research, and conservation. Computer algorithms can be more easily trained to recognize and distinguish between sounds from birds, frogs, mammals, and even insects.
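As a toy illustration of what “training an algorithm to distinguish sounds” can mean, the sketch below fits an off-the-shelf classifier to simple spectrogram-derived features; the clip names and labels are placeholders, and real systems typically use far larger datasets and deep neural networks rather than this simplified approach.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

def features(path):
    """Summarize a clip as its average log power in each frequency bin.
    Assumes all clips share the same sample rate so bins are comparable."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio[:, 0]
    _, _, power = spectrogram(audio, fs=rate, nperseg=512)
    return 10 * np.log10(power + 1e-12).mean(axis=1)

# Placeholder training data: short labeled clips of different sound sources.
clips = ["rhwo_01.wav", "rhwo_02.wav", "frog_01.wav", "frog_02.wav"]
labels = ["woodpecker", "woodpecker", "frog", "frog"]

X = np.array([features(c) for c in clips])
model = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Classify a new, unlabeled clip.
print(model.predict([features("mystery_clip.wav")]))
```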
We still have a lot of work to do in terms of dealing with the mountains of data these devices can collect, improving the devices themselves, and refining the algorithms that are used to mine through the audio data. Researchers like myself are working hard to push this field forward in ways that make field work, data collection and analysis more efficient and effective so that we can address the myriad environmental challenges we face.
I began my bioacoustics research this summer with a pilot season at Cedar Creek, where we placed ARUs on the landscape to test their feasibility, record hundreds of hours of red-headed woodpecker calls to build a sound library, and troubleshoot. Next spring and summer we’ll be setting up a network of recording devices throughout the red-headed woodpecker’s statewide range to capture “soundscapes” that we hope will provide the data we need to more accurately pinpoint where birds are located. We also hope to use what we learn from this study to develop a monitoring protocol to detect red-headed woodpecker population trends and responses to habitat management throughout the state.
Another exciting aspect of this work is that the audio data we collect are a permanent record of a soundscape and all of its biodiversity at a specific place in time, so although we might be studying a single species right now, down the road we (or someone else) can use our recordings to answer new questions or analyze different species.
This project is funded in part through grants from the Environment and Natural Resources Trust Fund, the University of Minnesota, the Audubon Chapter of Minneapolis, the Red-headed Woodpecker Recovery Project, Earth Cloud, Patagonia, and the Minnesota River Valley Audubon Chapter.
Please consider donating to this project! Donations are one of the best ways to help support the costs that allow us to carry out this research.
Spectrograms of Cedar Creek Red-headed Woodpecker Primary Calls and Drum
by Siah St. Clair and Dr. Elena West
The research focus for the upcoming red-headed woodpecker field season will include the deployment of autonomous recording units (ARUs) around the state of Minnesota to record the presence (or absence) of red-headed woodpeckers. As part of this process, the calls made by individual red-headed woodpeckers at Cedar Creek Ecosystem Science Reserve last year are being sorted into spectrogram files. This will create a library of specific spectrograms that will be used as templates for machine learning algorithms to search for and detect the calls of interest in the many thousands of hours of sound files researchers will have at the end of the season.
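As a rough sketch of how spectrogram templates can be matched against long recordings (an illustration only, not the project’s actual analysis pipeline; file names and the detection threshold are placeholders), one simple approach is to slide a template spectrogram across the recording’s spectrogram and flag time offsets where the correlation is high:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram, correlate2d

def log_spec(path, nperseg=512):
    """Load a WAV file and return its log-power spectrogram and time axis."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio[:, 0]
    freqs, times, power = spectrogram(audio, fs=rate, nperseg=nperseg)
    return 10 * np.log10(power + 1e-12), times

# Placeholder file names: a long ARU recording and a short clip of one call.
# Assumes both files share the same sample rate so frequency bins line up.
recording, rec_times = log_spec("aru_recording.wav")
template, _ = log_spec("kweeah_template.wav")

# Normalize so the match score reflects shape rather than loudness.
recording = (recording - recording.mean()) / recording.std()
template = (template - template.mean()) / template.std()

# Slide the template across the recording in time; "valid" mode leaves
# one crude correlation score per possible starting position.
scores = correlate2d(recording, template, mode="valid").ravel() / template.size

threshold = 0.5                      # placeholder detection threshold
for col in np.flatnonzero(scores > threshold):
    print(f"possible call near t = {rec_times[col]:.1f} s (score {scores[col]:.2f})")
```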
Below are example spectrograms of the four primary sounds made by red-headed woodpeckers at Cedar Creek last summer, collected from ARUs. The pictures are grainy and pixelated but give an idea of the range of these sounds.

Kweeah and then Drum

Churr-series call on the left; field sparrow on the right

One Churr-Series Call

5 Chatter Calls in 3 seconds
The “Kweeah” call is the most common territorial call of the red-headed woodpeckers at Cedar Creek. It is loud and can be heard from over 100 yards away. The Churr-series call is a slurred series of “Churr” sounds that seem like one continuous call to human ears. It is often heard at the culmination of a territorial dispute, with the pair together; it is loud and can last for 5 seconds. Drumming by red-headed woodpeckers often follows a series of “Kweeah” and/or “Churr” calls. The Drum lasts for one-half to one second at a rate of around 20-22 hits per second, at least for the red-headed woodpeckers recorded at Cedar Creek and examined on the spectrograms.
The “Chatter” call is a bit softer and lasts about two-tenths of a second, with 3-4 chatters within that time frame. This call is most often heard when the red-headed woodpecker perceives a threat or some kind of intrusion into its territory.
As can be seen, red-headed woodpecker calls cover most of the spectrogram range, top to bottom, especially if the bird is close. Compare that with the one spectrogram that shows a field sparrow as a narrow band of black on the right, and the red-headed woodpecker Churr-series call going from top to bottom of the range of sound on the left.