About BirdNET
BirdNET is a research collaboration that uses machine learning to recognize birds by sound and make acoustic monitoring accessible to everyone.
Mission
BirdNET aims to lower the barrier to using sound for biodiversity monitoring. By combining deep learning with open tools and citizen science, we help track bird populations and support conservation decisions at local to global scales.
- Provide high-quality bird sound identification models.
- Develop tools for large-scale passive acoustic monitoring.
- Engage birders and the public through intuitive apps and web tools.
- Support researchers with documented, reproducible workflows.
Collaboration
BirdNET is a joint effort between:
- K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology
- Chair of Media Informatics, Chemnitz University of Technology
Supported by researchers, engineers, educators, and community contributors.
Why acoustic monitoring?
Many bird species are more easily detected by sound than by sight, especially in dense vegetation, at night, or during migration. Passive acoustic monitoring:
- Captures presence and activity without human disturbance.
- Scales across seasons and remote habitats.
- Creates archives for long-term change detection.
- Enables multi-species monitoring from a single sensor.
Why AI can help
Manual review of thousands of hours of audio is not feasible. Machine learning:
- Automates species identification at scale.
- Extracts consistent features from noisy soundscapes.
- Speeds up survey workflows and reduces cost.
- Allows rapid iteration as new training data become available.
BirdNET’s role in acoustic biodiversity monitoring
BirdNET provides models and open tools that turn audio into species presence information. It is used in backyard stations, migration studies, protected area assessments, and citizen science challenges.
- Core models powering apps and edge devices.
- Embeddings reused for new classification tasks.
- Integration with pipelines for large-scale soundscape analysis.
- Open source components for transparency and reproducibility.
Key design ideas
- Robust to background noise and overlapping calls.
- Optimized for real-time (mobile / edge) and batch (server) use.
- Embeddings enable downstream filtering and custom models (see the sketch after this list).
- Continuous updates as new labeled data are curated.
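To make the embedding idea concrete, here is a minimal sketch of training a small downstream classifier on precomputed BirdNET embeddings. The file names, the scikit-learn choice, and the data layout are illustrative assumptions, not part of BirdNET itself.

```python
# Minimal sketch: train a small classifier on precomputed BirdNET embeddings.
# Assumptions: embeddings.npy holds one embedding vector per audio window,
# labels.npy holds matching integer class labels (both hypothetical files).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.load("embeddings.npy")   # shape: (n_windows, embedding_dim)
y = np.load("labels.npy")       # shape: (n_windows,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```

Because the embeddings already encode acoustic structure, even a simple linear model can often separate new classes with relatively little labeled data.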
Training data
BirdNET models are trained on curated bird vocalizations from multiple public and partner collections, filtered for quality and species consistency. Preparation includes:
- Segmenting recordings into short windows.
- Removing excessive noise or human speech.
- Balancing classes to reduce dominance of common species.
- Augmenting audio (mixing, shifting, filtering) to improve robustness.
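The sketch below illustrates two of these steps, segmentation and simple waveform augmentation. The window length, sample rate, and mixing weight are illustrative choices, not confirmed BirdNET parameters.

```python
# Illustrative sketch of window segmentation and simple waveform augmentation.
# The 3-second window and 48 kHz rate are assumptions, not confirmed settings.
import numpy as np

SR = 48000          # assumed sample rate (Hz)
WINDOW_S = 3.0      # assumed window length (seconds)

def segment(audio: np.ndarray, sr: int = SR, window_s: float = WINDOW_S):
    """Split a mono waveform into fixed-length, non-overlapping windows."""
    step = int(sr * window_s)
    return [audio[i:i + step] for i in range(0, len(audio) - step + 1, step)]

def augment(window: np.ndarray, noise: np.ndarray, rng: np.random.Generator):
    """Mix in background noise and apply a random circular time shift."""
    mixed = window + 0.25 * noise[: len(window)]   # mixing (assumed weight)
    shift = rng.integers(0, len(mixed))
    return np.roll(mixed, shift)                   # shifting

rng = np.random.default_rng(0)
audio = rng.standard_normal(SR * 10)   # stand-in for a 10 s recording
noise = rng.standard_normal(SR * 10)   # stand-in for a soundscape noise clip
windows = [augment(w, noise, rng) for w in segment(audio)]
print(len(windows), "windows of", len(windows[0]), "samples")
```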
Model architecture (simplified)
Audio windows are converted to spectrogram features and fed into a deep neural network (convolutional and residual layers) that produces:
- Embeddings: compact numeric representation of the sound.
- Class scores: per-species confidence estimates.
Models are exported to formats (e.g. TensorFlow Lite) suitable for mobile, edge, and server environments.
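To make the two-output idea concrete, here is a toy model in the same spirit. The layer counts, species count, and embedding dimension are placeholders, not the actual BirdNET architecture.

```python
# Toy sketch of the two-output idea (embeddings + class scores).
# Layer sizes and counts are placeholders, not the actual BirdNET network.
import tensorflow as tf

def build_model(n_mels=64, n_frames=128, n_species=500, emb_dim=256):
    spec = tf.keras.Input(shape=(n_mels, n_frames, 1), name="spectrogram")
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(spec)

    # One residual block: conv output added back to its input.
    r = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    r = tf.keras.layers.Conv2D(32, 3, padding="same")(r)
    x = tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([x, r]))

    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    embedding = tf.keras.layers.Dense(emb_dim, name="embedding")(x)
    scores = tf.keras.layers.Dense(
        n_species, activation="sigmoid", name="scores"
    )(embedding)
    return tf.keras.Model(spec, [embedding, scores])

model = build_model()
model.summary()
```

A model like this could then be converted for on-device use, for example with tf.lite.TFLiteConverter.from_keras_model.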
Recognition workflow
1. Capture
Microphones or autonomous recorders capture continuous audio, typically stored as WAV files. Time and (optionally) location metadata are saved alongside the recordings.
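A minimal capture sketch, assuming the common sounddevice and soundfile libraries (neither is a BirdNET component) and hypothetical file names and coordinates:

```python
# Sketch: record a short clip and store time/location metadata alongside it.
# sounddevice/soundfile are common Python audio libraries, not BirdNET parts;
# the 48 kHz rate, file names, and coordinates are assumptions.
import json
from datetime import datetime, timezone

import sounddevice as sd
import soundfile as sf

SR = 48000
DURATION_S = 10

audio = sd.rec(int(SR * DURATION_S), samplerate=SR, channels=1)
sd.wait()  # block until the recording is complete

sf.write("clip.wav", audio, SR)
meta = {
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "lat": 42.48,   # hypothetical coordinates
    "lon": -76.45,
}
with open("clip.json", "w") as f:
    json.dump(meta, f)
```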
2. Preprocess
Audio is normalized, split into short windows (e.g. 3 s), converted to spectrograms, and passed through noise filtering.
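A preprocessing sketch using librosa, one common choice for this step; the 3-second window, 48 kHz sample rate, and 64 mel bands are assumptions:

```python
# Sketch: normalize, window, and convert audio to log-mel spectrograms.
# librosa is one common tool for this; parameter values are assumptions.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav", sr=48000, mono=True)
y = y / (np.max(np.abs(y)) + 1e-9)             # peak-normalize

step = sr * 3                                   # 3 s windows (assumed length)
windows = [y[i:i + step] for i in range(0, len(y) - step + 1, step)]

specs = [
    librosa.power_to_db(
        librosa.feature.melspectrogram(y=w, sr=sr, n_mels=64)
    )
    for w in windows
]
print(len(specs), "spectrograms of shape", specs[0].shape)
```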
3. Infer
Each window is fed into the BirdNET model, which yields embeddings and per-species confidence scores tied to timestamps.
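An inference sketch using the TensorFlow Lite interpreter; the model file name and the stand-in random inputs are assumptions, and the real model's input layout may differ:

```python
# Sketch: run a TFLite model over preprocessed windows.
# Model file name and stand-in inputs are assumptions, not the shipped API.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

detections = []
for t in range(10):  # stand-in for 10 preprocessed windows
    x = np.random.rand(*inp["shape"]).astype(np.float32)  # stand-in input
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    detections.append({"start_s": t * 3.0, "scores": scores})
print(len(detections), "windows scored")
```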
4. Filter
Apply thresholds, region or season filters, and optional overlap merging to reduce false positives.
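A filtering sketch with a confidence threshold and merging of adjacent same-species windows; the 0.7 threshold, window length, and species names are illustrative choices:

```python
# Sketch: confidence threshold plus merging of adjacent detections of the
# same species. The 0.7 threshold is an illustrative choice.
THRESHOLD = 0.7

def filter_and_merge(raw, species_names, window_s=3.0):
    hits = []
    for d in raw:
        for i, s in enumerate(d["scores"]):
            if s >= THRESHOLD:
                hits.append({"species": species_names[i],
                             "start_s": d["start_s"],
                             "end_s": d["start_s"] + window_s,
                             "score": float(s)})
    hits.sort(key=lambda h: (h["species"], h["start_s"]))
    merged = []
    for h in hits:
        last = merged[-1] if merged else None
        if last and last["species"] == h["species"] and h["start_s"] <= last["end_s"]:
            last["end_s"] = max(last["end_s"], h["end_s"])  # extend the run
            last["score"] = max(last["score"], h["score"])
        else:
            merged.append(dict(h))
    return merged

raw = [{"start_s": 0.0, "scores": [0.9, 0.1]},
       {"start_s": 3.0, "scores": [0.8, 0.2]}]
print(filter_and_merge(raw, ["Turdus merula", "Erithacus rubecula"]))
```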
5. Aggregate
Summarize detections: species lists, daily activity curves, occupancy tallies, migration timing indicators.
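An aggregation sketch using pandas with stand-in detections; the column names and values are hypothetical:

```python
# Sketch: summarize merged detections into a species list and daily counts.
# pandas is one convenient option; column names and values are assumptions.
import pandas as pd

df = pd.DataFrame([
    {"species": "Turdus merula", "date": "2024-05-01"},
    {"species": "Turdus merula", "date": "2024-05-02"},
    {"species": "Erithacus rubecula", "date": "2024-05-01"},
])  # stand-in detections

species_list = sorted(df["species"].unique())
daily_counts = df.groupby(["date", "species"]).size().unstack(fill_value=0)
print(species_list)
print(daily_counts)
```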
6. Interpret
Use outputs for trend analysis, site comparison, conservation planning, or community engagement dashboards.
Funding
Work at the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang, whose support advances innovative conservation technologies.
Development of BirdNET is supported by the German Federal Ministry of Research, Technology and Space (FKZ 01IS22072), the German Federal Ministry for the Environment, Climate Action, Nature Conservation and Nuclear Safety (FKZ 67KI31040E), the German Federal Ministry of Economic Affairs and Energy (FKZ 16KN095550), the Deutsche Bundesstiftung Umwelt (project 39263/01) and the European Social Fund.
Acknowledgments
We gratefully acknowledge all supporters who enable open, global acoustic biodiversity monitoring through BirdNET.
Partners
BirdNET is a joint effort of partners from academia and industry. Without these partnerships, this project would not be possible.
Representative partner logos. See the publications and tools pages for additional collaborators.