Expert and lay judgements of danger and recklessness in adventure sports
In: Journal of risk research: the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan, Band 26, Heft 2, S. 133-146
ISSN: 1466-4461
In: Journal of multi-criteria decision analysis, Band 12, Heft 4-5, S. 261-271
ISSN: 1099-1360
Abstract: This paper concerns the integration of goal programming and scenario planning as an aid to decision making under uncertainty. Goal programming as a methodology emphasises the resolution of conflict among criteria; scenario planning focuses on the treatment of uncertainty relating to future states of the world. Integrating the two methodologies is based on the simple formulation of a super-goal programme consisting of one scenario-specific goal programme in each scenario. Issues relating to the structuring of the super-problem, aggregation both within and over scenarios, and the incorporation of probabilistic information are discussed. Copyright © 2005 John Wiley & Sons, Ltd.
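The super-goal programme described in the abstract can be sketched as a single linear programme: one goal constraint per scenario, with probability-weighted deviation variables in the objective. The two scenarios, goal targets, probabilities, and the shared resource constraint below are illustrative assumptions, not taken from the paper.

```python
from scipy.optimize import linprog

# Decision variables shared across scenarios: x1, x2.
# Each scenario contributes one goal with under/over deviations (n, p):
#   scenario A: 3*x1 + 2*x2 + nA - pA = 12   (probability 0.6)
#   scenario B: 1*x1 + 4*x2 + nB - pB = 10   (probability 0.4)
# A shared resource constraint x1 + x2 <= 4 makes the goals conflict.
# Variable order: [x1, x2, nA, pA, nB, pB]
c = [0, 0, 0.6, 0.6, 0.4, 0.4]       # probability-weighted deviations
A_eq = [[3, 2, 1, -1, 0, 0],
        [1, 4, 0, 0, 1, -1]]
b_eq = [12, 10]
A_ub = [[1, 1, 0, 0, 0, 0]]
b_ub = [4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.x[:2], res.fun)  # compromise decision and weighted deviation
```

In this toy instance the solver settles on the compromise x = (2, 2), meeting scenario B's goal exactly and under-achieving scenario A's goal by 2 units, for a weighted deviation of 1.2 – the "aggregation over scenarios" the abstract refers to.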
ED is supported by a research chairship from the African Institute for Mathematical Sciences South Africa. This work was carried out with the aid of a grant from the International Development Research Centre, Ottawa, Canada (www.idrc.ca), and with financial support from the Government of Canada, provided through Global Affairs Canada (GAC; www.international.gc.ca). This work was also supported by funding from Microsoft's AI for Earth program. ; Progress in deep learning, more specifically in using convolutional neural networks (CNNs) to create classification models, has been tremendous in recent years. Within bioacoustics research, a large number of recent studies use CNNs. Designing CNN architectures from scratch is non-trivial and requires knowledge of machine learning. Furthermore, the hyper-parameter tuning associated with CNNs is extremely time-consuming and requires expensive hardware. In this paper we assess whether it is possible to build good bioacoustic classifiers by adapting and re-using existing CNNs pre-trained on the ImageNet dataset instead of designing them from scratch – a strategy known as transfer learning that has proved highly successful in other domains. This study is a first attempt to conduct a large-scale investigation of how transfer learning can be used for passive acoustic monitoring (PAM), to simplify the implementation of CNNs and the design decisions made when creating them, and to remove time-consuming hyper-parameter tuning phases. We compare 12 modern CNN architectures across 4 passive acoustic datasets that target calls of the Hainan gibbon Nomascus hainanus, the critically endangered black-and-white ruffed lemur Varecia variegata, the vulnerable Thyolo alethe Chamaetylas choloensis, and the Pin-tailed whydah Vidua macroura. We focus our work on data scarcity issues by training PAM binary classification models on very small datasets, with as few as 25 verified examples.
Our findings reveal that transfer learning can achieve an F1 score of up to 82% while keeping ...
BASE
Funding: Scottish Government (Grant Number(s): Marine Mammal Scientific Support Research Program); Homebrew Films; National Research Foundation of South Africa (Grant Number(s): 105782, 90782). ; Video data are widely collected in ecological studies, but manual annotation is a challenging and time-consuming task and has become a bottleneck for scientific research. Classification models based on convolutional neural networks (CNNs) have proved successful in annotating images, but few applications have extended these to video classification. We demonstrate an approach that combines a standard CNN summarizing each video frame with a recurrent neural network (RNN) that models the temporal component of video. The approach is illustrated using two datasets: one collected by static video cameras detecting seal activity inside coastal salmon nets, and another collected by animal-borne cameras deployed on African penguins, used to classify behavior. The combined RNN-CNN gave a 25% relative reduction in test set classification error over an image-only model for penguins (accuracy improving from 80% to 85%), and substantially improved classification precision or recall for four of six behavior classes (12–17%). Image-only and video models classified seal activity with very similar accuracy (88% and 89%), and no seal visits were missed entirely by either model. Temporal patterns related to movement provide valuable information about animal behavior, and classifiers benefit from including these explicitly. We recommend including temporal information whenever manual inspection suggests that movement is predictive of class membership.
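The CNN-plus-RNN arrangement the abstract describes can be sketched as below: a convolutional encoder summarizes each frame, an LSTM consumes the per-frame feature sequence, and a linear layer classifies the clip. The tiny encoder, layer sizes, and six behavior classes here are placeholders, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class VideoClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM over time."""

    def __init__(self, n_classes=6, feat_dim=64, hidden=128):
        super().__init__()
        # Tiny stand-in for the per-frame CNN (in practice a pretrained
        # image network would summarize each frame).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),     # -> (N, feat_dim, 1, 1)
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):            # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)     # (batch*time, 3, H, W)
        feats = self.cnn(frames).flatten(1).view(b, t, -1)
        out, _ = self.rnn(feats)         # (batch, time, hidden)
        return self.head(out[:, -1])     # classify from the last time step

model = VideoClassifier()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
print(logits.shape)                           # (2, 6) class logits
```

Dropping the LSTM and classifying single frames recovers the image-only baseline the study compares against; the recurrent layer is what lets movement across frames inform the prediction.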
Fieldwork was funded by an Arcus Foundation grant to STT and a Wildlife Acoustics grant to JVB. ID is supported in part by funding from the National Research Foundation of South Africa (Grant ID 90782, 105782). ED is supported by a postdoctoral fellowship from the African Institute for Mathematical Sciences South Africa, Stellenbosch University and the Next Einstein Initiative. This work was carried out with the aid of a grant from the International Development Research Centre, Ottawa, Canada (www.idrc.ca), and with financial support from the Government of Canada, provided through Global Affairs Canada (GAC; www.international.gc.ca). ; Extracting species calls from passive acoustic recordings is a common preliminary step to ecological analysis. For many species, particularly those occupying noisy, acoustically variable habitats, the call extraction process continues to be largely manual – a time-consuming and increasingly unsustainable process. Deep neural networks have been shown to offer excellent performance across a range of acoustic classification applications, but are relatively underused in ecology. We describe the steps involved in developing an automated classifier for a passive acoustic monitoring project, using the identification of calls of the Hainan gibbon Nomascus hainanus, one of the world's rarest mammal species, as a case study. This includes preprocessing (selecting a temporal resolution, windowing and annotation); data augmentation; processing (choosing and fitting appropriate neural network models); and post-processing (linking model predictions to replace, or more likely facilitate, manual labelling). Our best model converted acoustic recordings into spectrogram images on the mel frequency scale and used these to train a convolutional neural network. Model predictions were highly accurate, with per-second false positive and false negative rates of 1.5% and 22.3%.
Nearly all false negatives occurred at the fringes of calls, adjacent to segments where the call was correctly identified, so that very few calls were missed altogether. A post-processing step identifying intervals of repeated calling reduced an 8-h recording to, on average, 22 min for manual processing, and did not miss any calling bouts over 72 h of test recordings. Gibbon calling bouts were detected regularly in multi-month recordings from all selected survey points within Bawangling National Nature Reserve, Hainan. We demonstrate that passive acoustic monitoring incorporating an automated classifier represents an effective tool for remote detection of one of the world's rarest and most threatened species. Our study highlights the viability of using neural networks to automate, or greatly assist, the manual labelling of data collected by passive acoustic monitoring projects. We emphasize that model development and implementation should be informed and guided by ecological objectives, and we increase the accessibility of these tools with a series of notebooks that allow users to build and deploy their own acoustic classifiers.
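The mel-spectrogram front end mentioned in the abstract can be sketched with NumPy and SciPy alone: take a magnitude STFT, then pool frequency bins through a triangular mel filterbank. The FFT size, band count, sample rate, and the 100–2000 Hz analysis band below are illustrative assumptions (not the paper's settings), and in practice a library such as librosa would usually handle this step.

```python
import numpy as np
from scipy.signal import stft

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(y, sr, n_fft=1024, n_mels=64, fmin=100.0, fmax=2000.0):
    """Log-mel spectrogram (dB) of signal y sampled at sr Hz."""
    _, _, Z = stft(y, fs=sr, nperseg=n_fft)
    power = np.abs(Z) ** 2                     # (n_fft//2 + 1, n_frames)
    # Triangular filters equally spaced on the mel scale in [fmin, fmax].
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)   # rising edge
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)   # falling edge
    return 10.0 * np.log10(fb @ power + 1e-10)

sr = 8000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440.0 * t)  # one second of a 440 Hz test tone
S = mel_spectrogram(y, sr)
print(S.shape)                      # (64 mel bands, n_frames)
```

Each column of `S` is then rendered as one spectrogram image column; stacking a fixed window of columns yields the images fed to the CNN classifier.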