47. Building an acoustic recognition database Tadarida-L (Toolbox Animal Detection on Acoustic Recordings)
SonoChiro (Biotope) was developed by Yves Bas, but he has since moved on to Tadarida-L. Recommended by Pettersson. Recognises 63%.
Students at a UK university have developed the auto-recording hardware AudioMoth (Open Acoustic Devices), a small device that is rapidly gaining popularity. https://www.openacousticdevices.info/audiomoth
BatIdent is the German program from ecoObs that belongs with the BatCorder and scores very well on recognition (81%).
Kaleidoscope scores 71%; Tadarida is certainly no worse than Kaleidoscope, provided the recordings are good.
BatExplorer is the program that comes with the BatLogger and recognises 53%.
Tadarida-L (Toolbox Animal Detection on Acoustic Recordings) is an open-source toolbox, written altruistically by Yves Bas, with which you can build your own classifier. It works on the basis of recognising sound events. Used for birds, bush crickets and bats.
Tadarida-D=Detection
Tadarida-C=Classification
Tadarida-L=Labeling
https://github.com/YvesBas
In other programs you get one ID per wav file, but Tadarida can assign IDs to the individual sound events, each with a probability rating for the identification; for the final output it summarises these per file. Currently the NIOZ database is not strong enough, but NIOZ wants to have both the identification and the position of the bats.
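A minimal sketch (not Tadarida's own code) of how per-sound-event identifications with probabilities could be summarised into a single ID per wav file; the column names and the species codes Pippip/Pipnat are just illustrative assumptions.

```python
# Sketch: summarise per-sound-event IDs into one ID per wav file.
# Column names and species codes are illustrative, not Tadarida's actual output format.
import pandas as pd

events = pd.DataFrame({
    "file":        ["night1.wav"] * 4,
    "species":     ["Pippip", "Pippip", "Pipnat", "Pippip"],
    "probability": [0.92, 0.88, 0.40, 0.95],
})

# Best probability per species within each file, then the best-scoring species per file.
per_species = events.groupby(["file", "species"])["probability"].max()
file_id = per_species.groupby(level="file").idxmax()
print(file_id)   # night1.wav -> ('night1.wav', 'Pippip')
```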
Stuart Newson's Norfolk classifier is used in Belgium and the UK.
Stuart built a classifier for the UK to identify the UK species. The plan is to extend the UK classifier with the Benelux species; the classifier learns from its mistakes.
https://openresearchsoftware.metajnl.com/articles/10.5334/jors.154/
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=50m
https://photos.app.goo.gl/iCaBSZHkzw9cNW9K9
Auto Rec Auto ID software
In France there is also a monitoring project based on Tadarida, and they too do not want to make their French recognition database available. https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13198
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=11m44s
The first lecture (from 11:44) is by Johann Prescher & Dirk Oosterholt: 'Long-eared bats cover large distances through a hedgerow landscape to forage in marshland.'
During research into the effectiveness of hop-overs as connections in a hedgerow landscape for the brown long-eared bat, it was discovered that these bats covered long distances to forage in marshland. Until recently it was assumed that brown long-eared bats do not cover such large distances.
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=52m
The second lecture (from 52:18) is by Marc Van De Sijpe and Claire Hermans: 'Introduction to auto-recording and auto-identification, and an explanation of how Tadarida and the BTO classifier built on Tadarida work.'
This presentation is given in Dutch and English. First, Marc Van De Sijpe gives an introduction to auto-recording and auto-identification, including Tadarida, an open-source classification toolbox in the programming and statistics language R. Then Claire explains in English how Tadarida works, after which the BTO classifier is also demonstrated.
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=101m (effect of light intensity on habitat loss)
The third lecture (from 1:41:17) is again by Claire Hermans and gives a preview of her project 'Light on landscape', in which she explains how microphone arrays are used to reconstruct the flight paths of bats.
https://openresearchsoftware.metajnl.com/articles/10.5334/jors.154/
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=47m (OpenSource Introduction)
Tadarida Open Software Toolbox
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=59m (English)
DIY self-build TeensyBat-VLEN bat detector/recorder, open source (45)
How to start a new bat classification database (classifier) together.
Are there people interested in building a database?
Marc provided some identified recordings to work with in Tadarida-L.
https://www.youtube.com/watch?v=z5C3wLsyGdE&t=111m
https://openresearchsoftware.metajnl.com/articles/10.5334/jors.154/
There is an R program. Any user who subscribes can upload a file via a simple window to the cloud in England and is sent back a .csv with the result. The volunteers in the UK used to throw away their recordings, so now the uploaded files are being stored in the cloud. The identifications will be stored in the UK counterpart of Waarneming.nl. People can report when a detection is wrong and the R program can be adapted. It is unknown whether it will be open source; without funding it will not be open source. It is free for volunteers.
REQUIREMENTS:
Bat audio wav files with the species name and the location of the bat in the file, for example encoded in the file name (see the sketch below).
NEM VT has recordings.
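A minimal sketch of what such a requirement could look like in practice: pulling the species name and location back out of a wav file name. The naming convention Species_Location_Timestamp.wav used here is a hypothetical example, not a prescribed format.

```python
# Sketch: parse species name and location from a wav file name.
# The convention "Species_Location_Timestamp.wav" is hypothetical.
from pathlib import Path

def parse_recording_name(path: str) -> dict:
    species, location, timestamp = Path(path).stem.split("_", 2)
    return {"species": species, "location": location, "timestamp": timestamp}

print(parse_recording_name("Pippip_Texel-Noord_20210612-2304.wav"))
# {'species': 'Pippip', 'location': 'Texel-Noord', 'timestamp': '20210612-2304'}
```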
Bestimmung von Fledermausrufaufnahmen und Kriterien für die Wertung von akustischen Artnachweisen - Teil 1 (Identification of bat call recordings and criteria for evaluating acoustic species records, part 1)
The original FFT is processed every 0.67 ms. If you use a zoom factor of 4, you only take every fourth sample, so it takes four times longer until you have acquired enough samples for the FFT, i.e. 2.67 ms. Now, the trick is that you do not wait until you have collected all the samples, but you allow for "overlap" in the samples and perform the FFT with the same rhythm of 0.67 ms (see the sketch below). [Please note that this also takes four times the processing power compared to non-overlapping samples for the FFT and also takes more memory.]
I have collected more information on the ZoomFFT in this Wiki:
https://github.com/df8oe/UHSDR/wiki/...ode-=-Zoom-FFT
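To illustrate the principle described above (decimate by the zoom factor, but keep the FFT rhythm by overlapping frames), here is a small numpy sketch. It is not the Teensy audio library code; the sample rate, FFT length and test tone are assumptions, and a real ZoomFFT would low-pass filter before decimating.

```python
# Illustration of the zoom-FFT idea: keep every 4th sample (zoom factor 4),
# but still produce an FFT at the original frame rhythm by overlapping frames.
# Values are illustrative, not taken from the Teensy code; no anti-alias
# filter is applied before the naive decimation.
import numpy as np

fs = 384_000            # assumed original sample rate
zoom = 4                # zoom factor: keep every 4th sample
n_fft = 256             # FFT length
hop = n_fft // zoom     # hop so the FFT rhythm matches the non-zoomed case

t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 45_000 * t)   # 45 kHz test tone, 1 second
decimated = signal[::zoom]                # naive decimation

frames = []
for start in range(0, len(decimated) - n_fft, hop):
    frame = decimated[start:start + n_fft] * np.hanning(n_fft)
    frames.append(np.abs(np.fft.rfft(frame)))   # overlapping FFTs

print(np.array(frames).shape)   # (n_frames, n_fft // 2 + 1)
```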
I am not experienced in programming audio library blocks, but my gut feeling is that it could be easier to just use a queue object from the library to get the samples and perform all the calculations in the main loop rather than inside an audio library object (because there are different sample rates involved). However, for people also interested in using the ZoomFFT: if you design a specific audio library object, you will get much more credit ;-).
Best wishes,
BTW: this brand-new publication will rapidly become the professional standard for the identification of bat calls from spectrograms in Germany. Maybe it also helps others with the ID of bats in Central Europe.
https://www.bestellen.bayern.de/appl...tMUG,ALLE:x)=X
Bestimmung von Fledermausrufaufnahmen und Kriterien für die Wertung von akustischen Artnachweisen - Teil 1
=====================
A little off-topic, OK, but now that the bat detector works, you need some info about bats and their bioacoustical properties and how to identify them by their ultrasound calls.
This is the "bible" of bat bioacoustics, if you want to know everything about bats and their calls:
Barataud et al. (2015): Acoustic ecology of European bats: Species identification, study of their habitats and foraging behaviour. -
http://www.nhbs.com/title/199366/aco...-european-bats --> not only for European bat friends, the bioacoustics & methods section is universal
There are many, many nice websites on bat calls; here is a subjective choice:
https://www.researchgate.net/profile...ication_detail
If you live in the northern hemisphere, we are very near to the end of the bat season, before the bats go to their wintering grounds. But if you live in a city, it could be worth going outside when the sun goes down and try to detect the last bats of the mating season doing their courtship calls.
Have fun with the Teensy and with bat detection,
===========
This is the "bible" of bat bioacoustics, if you want to know everything about bats and their calls:
Barataud et al. (2015): Acoustic ecology of European bats: Species identification, study of their habitats and foraging behaviour. -
http://www.nhbs.com/title/199366/aco...-european-bats --> not only for european bat friends, the bioacoustics & methods section is universal
There are many many nice websites on batcalls, here is a subjective choice:
https://www.researchgate.net/profile...ication_detail
If you live in the northern hemisphere, we are very near to the end of the bat season, before the bats go to their wintering grounds. But if you live in a city, it could be worth going outside when the sun goes down and try to detect the last bats of the mating season doing their courtship calls.
https://www.bestellen.bayern.de/application/eshop_app000005?SID=1615997943&ACTIONxSETVAL(artdtl.htm,APGxNODENR:34,AARTxNODENR:193326,USERxA
In the Upper Rhine valley there is an unknown treasure: the largest groundwater reservoir in Europe. This groundwater stream feeds and connects wetlands of unique beauty, full of rare plants and animals. The award-winning underwater cameraman Serge Dumont shows this unknown world in breathtaking images.
https://www.ardmediathek.de/video/unkraut/grundwasser-leben-aus-der-tiefe/br-fernsehen/Y3JpZDovL2JyLmRlL3ZpZGVvL2RhMWFhODc4LWZhMDMtNGIwYS04NzUwLWNkOTA4NzIyM2ZjZQ/
Who's singing? Automatic bird sound recognition with machine learning - Dan Stowell (PyData London 2018)
Bird sounds are complex and fascinating. Can we automatically "understand" them using machine learning? I will describe my academic research into "machine listening" for bird sounds. I'll tell you why it's important, methods we use, Python libraries, open code and open data that you can use. Examples of the latest research, and a successful commercial recognition app (Warblr).
https://www.youtube.com/watch?v=pzmdOETnhI0
https://www.slideshare.net/PyData/whos-singing-automatic-bird-sound-recognition-with-machine-learning-dan-stowell
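A typical first step in this kind of "machine listening" is turning the audio into a log-mel spectrogram that a classifier can work on. The sketch below is generic librosa code, not code from the talk; the file name and parameter values are illustrative.

```python
# Sketch: compute a log-mel spectrogram as classifier input features.
# Generic librosa usage; file name and parameters are illustrative.
import numpy as np
import librosa

y, sr = librosa.load("birdsong.wav", sr=22050)               # load mono audio
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)  # mel spectrogram
log_mel = librosa.power_to_db(mel, ref=np.max)               # convert to dB

print(log_mel.shape)   # (n_mels, n_frames)
```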
186-NDR Wildes Deutschland ARTE Mediathek
Bat and Orthoptera sounds on Xeno-canto / GBIF
https://www.nlbif.nl/bat-sounds-on-xeno-canto-gbif/
On www.xeno-canto.org a large and diverse community, with professional ornithologists and sound recordists as well as "citizen scientists" and casual observers, has brought together a truly global reference database of bird sounds, with over 660,000 sound recordings of more than 10,000 bird species. Over the years the website has become an indispensable tool for everyone interested in bird song worldwide. As an example, a Google Scholar search turns up 3260 results (November 1st 2021). The collection is housed on a stable institutional IT infrastructure in the Netherlands maintained by the Dutch GBIF partner Naturalis Biodiversity Center.
Communities of bat researchers and enthusiasts have steadily grown worldwide. Recently, research involving passive acoustic monitoring has surged; in fact, a large portion of bat species are rarely sampled using other methods. Accessible, high-quality call libraries for bat sounds are vital for the field to progress. Currently no repository similar to Xeno-canto exists for bat sounds. For example, on April 1st 2021 GBIF referred to 490 bat sounds, mostly from Europe and North America. With a larger number of bat species and recordings openly accessible on Xeno-canto, it will be possible to better train algorithms used for automatic species recognition. This will greatly improve our ability to generate baseline information on species diversity for many as yet unexplored sites, but also to more effectively monitor bat species or sites of conservation interest. In addition, open access to bat vocalizations will stimulate research at larger geographical, temporal and evolutionary scales.
This project will result in a stable repository for bat sounds (Chiroptera) as an expansion of the Xeno-canto collection on www.xeno-canto.org of bird (Aves) and grasshopper (Orthoptera) sounds. The sounds and metadata will be shared through GBIF and are also available through an API.
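The existing Xeno-canto API (v2) already serves recordings as JSON; a sketch of querying it from Python is below. The grp:bats query tag for the Chiroptera group is an assumption here, so check the current API documentation on xeno-canto.org before relying on it.

```python
# Sketch: query the public Xeno-canto API (v2) for recordings.
# The "grp:bats" tag is an assumption; see the API docs on xeno-canto.org.
import requests

resp = requests.get(
    "https://xeno-canto.org/api/2/recordings",
    params={"query": "grp:bats gen:Pipistrellus"},
    timeout=30,
)
data = resp.json()
print(data.get("numRecordings"))
for rec in data.get("recordings", [])[:5]:
    # each record carries genus, species, country and a link to the sound file
    print(rec["gen"], rec["sp"], rec["cnt"], rec["file"])
```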
Grasshopper-sound ID app "CrickIt" (Android)
The aquilaecologie.sprinkenapp1 app automatically recognises Orthoptera sounds, just like BirdNET but for grasshoppers and crickets.
https://play.google.com/store/apps/details?id=com.aquilaecologie.sprinkenapp1
Adding recordings to observations is greatly appreciated, because that improves the recognition!
Gewoon Spitskopje - 100%, 0% zuidelijk spitskopje
Zuidelijk Spitskopje - 90%, 9% gewoon spitskopje
Grote Groene Sabelsprinkhaan - 100%, 0% zuidelijk spitskopje
Kleine Groene Sabelsprinkhaan - 100%, 0% rosse sprinkhaan
Wrattenbijter - 89%, 11% bramensprinkhaan
Heidesabelsprinkhaan - 93%, 3% wrattenbijter
Greppelsprinkhaan - 83%, 10% gewoon spitskopje
Bramensprinkhaan - 100%, 0% wrattenbijter
Zadelsprinkhaan - 100%, 0% locomotiefje
Veldkrekel - 100%, 0% huiskrekel
Huiskrekel - 100%, 0% veldkrekel
Boskrekel - 100%, 0% sikkelsprinkhaan
Boomkrekel - 100%, 0% boskrekel
A Zwart wekkertje is recognised as a Wekkertje; these two species are indeed hard to tell apart. The difference is not in the spectrum but in the pause between the strophes (see the sketch below).
Krasser and Zompsprinkhaan are distinguished well.
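Since the note above says the difference lies in the pause between strophes rather than in the spectrum, a crude way to look at that feature is to measure the silent gaps in the energy envelope of a recording. This is a generic energy-threshold sketch, not how the CrickIt app actually works; the file name and threshold are assumptions.

```python
# Sketch: measure pauses between song strophes from an energy envelope.
# Plain amplitude thresholding for illustration only.
import numpy as np
import soundfile as sf

audio, sr = sf.read("grasshopper.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # mix down to mono

frame = int(0.01 * sr)                         # 10 ms frames
n = len(audio) // frame
energy = np.array([np.mean(audio[i*frame:(i+1)*frame] ** 2) for i in range(n)])

active = energy > 0.1 * energy.max()           # crude "strophe present" mask
gaps, run = [], 0                              # silent-run lengths between strophes
for a in active:
    if not a:
        run += 1
    elif run:
        gaps.append(run * frame / sr)
        run = 0
if gaps:
    print("median pause between strophes: %.2f s" % float(np.median(gaps)))
```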
Flora Incognita
https://rickmiddelbos.wordpress.com/
https://www.sikkom.nl/actueel/Video-Huisbioloog-Rick-ontdekt-verborgen-onderwaterwereld-bij-het-Stadsstrand-28528635.html
https://www.npostart.nl/collectie/natuur
https://sites.google.com/visipedia.org/index/publications
https://www.inaturalist.org/blog/31806 new version of the computer vision model
https://www.kaggle.com/c/inaturalist-challenge-at-fgvc-2017 Downloadble Model
https://sites.google.com/view/fgvc4/home
https://github.com/visipedia/inat_comp/blob/master/README.md There are a total of 1,010 species in the dataset, spanning 72 genera, with a combined training and validation set of 268,243 images. The dataset was constructed such that each genus contains at least 10 species, making the dataset inherently fine-grained. The primary difference between the 2019 competition and the 2018 competition is the way species were selected for the dataset. For the 2019 dataset, we filtered out all species that had insufficient observations. From this reduced set, we filtered out all species that were not members of genera with at least 10 species remaining. This produced a dataset of 72 genera, each with at least 10 species, for a total of 1,010 species. Our aim was to produce a collection of fine-grained problems that are representative of the natural world. The evaluation metric was made more strict in 2019, going to top-1 error as opposed to top-3 (see the sketch after the links below).
https://docs.google.com/spreadsheets/d/1JHn6J_9HBYyN5kaVrH1qcc3VMyxOsV2II8BvSwufM54/edit#gid=0
https://github.com/visipedia/inat_comp#data
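A toy illustration of the metric change mentioned above (top-3 error vs. the stricter top-1 error); this is not the competition's evaluation code, and the scores are made up.

```python
# Sketch: top-1 vs top-3 error on toy prediction scores.
import numpy as np

scores = np.array([           # rows: images, columns: class scores
    [0.10, 0.70, 0.20, 0.00],
    [0.50, 0.20, 0.20, 0.10],
    [0.05, 0.15, 0.30, 0.50],
])
labels = np.array([1, 2, 3])  # true class index per image

top1_err = np.mean(scores.argmax(axis=1) != labels)
top3 = np.argsort(scores, axis=1)[:, -3:]     # indices of the 3 best classes
top3_err = np.mean([labels[i] not in top3[i] for i in range(len(labels))])

print(f"top-1 error: {top1_err:.2f}, top-3 error: {top3_err:.2f}")
```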
Beery, S., Van Horn, G., Perona, P. Recognition in Terra Incognita. The European Conference on Computer Vision (ECCV), 2018. [pdf]
Van Horn, G., Mac Aodha, O., Song, Y., Cui, Y., Sun, C., Shepard, A., Hartwig, A., Perona, P., and Belongie, S. The iNaturalist Species Classification and Detection Dataset. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [pdf]
Van Horn, G., Loarie, S., Belongie, S., and Perona, P. Lean Multiclass Crowdsourcing. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [pdf]
https://www.inaturalist.org/pages/help#cv-taxa
https://www.inaturalist.org/pages/help#computer-vision
https://www.inaturalist.org/pages/help#cv-select
https://www.inaturalist.org/blog/31806-a-new-vision-model
FWIW, there's also discussion and some additional charts at
https://forum.inaturalist.org/t/psst-new-vision-model-released/10854/11
https://www.inaturalist.org/pages/identification_quality_experiment
https://www.inaturalist.org/journal/loarie/10016-identification-quality-experiment-update
https://www.inaturalist.org/journal/loarie/9260-identification-quality-experiment-update
Even for a rare species, the system might still recommend it based on a nearby observation.
https://forum.inaturalist.org/t/identification-quality-on-inaturalist/7507
https://github.com/kueda/inaturalist-identification-quality-experiment/blob/master/identification-quality-experiment.ipynb
"nearby" means near in space and time
The model became better at sedges and grasses.
the vision model does not itself incorporate non-image data other than taxon IDs
(b/c = because)
https://www.inaturalist.org/blog/25510-vision-model-updates ("taxon and region comparisons" 20190614)
https://distill.pub/2020/circuits/zoom-in/ ("connections between neurons")
https://www.inaturalist.org/projects/flora-of-russia/journal/31726
https://www.inaturalist.org/posts/31726-
https://forum.inaturalist.org/t/provide-relevant-geographic-data-confidence-level-accuracy-scores-with-ai-suggestions/9226/2
https://forum.inaturalist.org/t/range-covered-by-the-seen-nearby-feature/2849/5
We're excited to introduce PUC (Portable Universe Codec), our AI-powered bioacoustics platform.
Packed with dual microphones, WiFi/BLE, GPS, environmental sensors, and a built-in neural engine, all in a weatherproof enclosure, PUC is ready to capture all that nature can throw at it!
https://www.birdweather.com/
https://www.birdweather.com/addons
Our cloud server uses the BirdNET neural network (a joint project between the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology and the Chemnitz University of Technology) to process all audio soundscapes from the PUC, featuring:
Over 6000 global bird species!
Man-made sources (e.g. fireworks, engine)
Non-avian species (e.g. coyote, dog, fox, squirrel, frogs, insects)
Automatic removal of any soundscapes with human vocal detections
In addition to its stereo microphones, PUC is also packed with environmental sensors (temperature, humidity, pressure, air quality, tVOC, CO2, and a spectral light sensor), so you're able to track local weather conditions.
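For anyone who wants to run the same BirdNET model locally instead of through the BirdWeather cloud, a minimal sketch using the third-party birdnetlib wrapper around BirdNET-Analyzer is shown below; the exact API may differ between versions, and the coordinates, date and confidence threshold are illustrative only.

```python
# Sketch: run BirdNET locally on a soundscape wav via the birdnetlib wrapper.
# The API follows birdnetlib's documented usage but may change between
# versions; lat/lon, date and threshold are illustrative.
from datetime import datetime
from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

analyzer = Analyzer()                    # loads the BirdNET model
recording = Recording(
    analyzer,
    "soundscape.wav",
    lat=52.37, lon=4.90,                 # location narrows the species list
    date=datetime(2023, 6, 1),           # date enables the weekly species list
    min_conf=0.25,                       # confidence threshold
)
recording.analyze()
for det in recording.detections:         # list of dicts with species and confidence
    print(det["common_name"], det["confidence"])
```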
https://www.raspberrypi.com/news/classify-birds-acoustically-with-birdnet-pi/
https://app.birdweather.com/api/index.html
https://app.birdweather.com/api/v1/index.html
https://app.birdweather.com/data