So Carcinogens Have Thresholds: How Do We Decide What Exposure Levels Should Be Considered Safe?
In: Risk analysis: an international journal, Volume 17, Issue 1, pp. 1-3
ISSN: 1539-6924
In: Risk analysis: an international journal, Volume 15, Issue 5, p. 543
ISSN: 1539-6924
In: Risk analysis: an international journal, Volume 11, Issue 1, pp. 11-12
ISSN: 1539-6924
In: Risk analysis: an international journal, Volume 6, Issue 2, pp. 111-112
ISSN: 1539-6924
In: Risk analysis: an international journal, Volume 19, Issue 2, pp. 231-247
ISSN: 1539-6924
We investigated the way results of human health risk assessments are used, and the theory used to describe those methods, sometimes called the "NAS paradigm." Contrary to a key tenet of that theory, current methods have strictly limited utility. The characterizations now considered standard, Safety Indices such as "Acceptable Daily Intake," "Reference Dose," and so on, usefully inform only decisions that require a choice between two policy alternatives (e.g., approve a food additive or not), decided solely on the basis of a finding of safety. Risk is characterized as the quotient of one of these Safety Indices divided by an estimate of exposure: a quotient greater than one implies that the situation may be considered safe. Such decisions are very widespread, both in the U.S. federal government and elsewhere. No current method is universal; different policies lead to different practices, for example, in California's "Proposition 65," where statutory provisions specify some practices. Further, an important kind of human health risk assessment is not recognized by this theory: this kind characterizes risk as the likelihood of harm, given estimates of exposure consequent to various decision choices. Likelihood estimates are necessary whenever decision makers have many possible decision choices and must weigh more than two societal values, as in EPA's regulation of "conventional air pollutants." These estimates cannot be derived using current methods; different methods are needed. Our analysis suggests changes needed in both the theory and practice of human health risk assessment, and in how that practice is depicted.
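The quotient-based safety characterization described in this abstract lends itself to a one-line computation. The sketch below follows the abstract's convention (Safety Index divided by exposure, with a quotient above one read as "safe"); the function name and the Reference Dose and exposure values are hypothetical, chosen only for illustration.

```python
def safety_quotient(safety_index: float, exposure: float) -> float:
    """Quotient of a Safety Index (e.g., a Reference Dose in mg/kg-day)
    over an estimated exposure in the same units; a value greater than
    one implies the situation may be considered safe."""
    if exposure <= 0:
        raise ValueError("exposure must be positive")
    return safety_index / exposure

# Hypothetical values: Reference Dose of 0.5 mg/kg-day,
# estimated intake of 0.1 mg/kg-day.
q = safety_quotient(0.5, 0.1)
print(q > 1)  # exposure is below the Reference Dose
```

Note that this binary screen is exactly the limitation the abstract identifies: it supports a yes/no safety finding, but yields no likelihood of harm for weighing many decision alternatives.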
In: Political analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 30, Issue 2, pp. 167-178
ISSN: 1476-4989
Across the social sciences, scholars regularly pool effects over substantial periods of time, a practice that produces faulty inferences if the underlying data generating process is dynamic. To help researchers better perform principled analyses of time-varying processes, we develop a two-stage procedure based upon techniques for permutation testing and statistical process monitoring. Given time-series cross-sectional data, we break the role of time through permutation inference and produce a null distribution that reflects a time-invariant data generating process. The null distribution then serves as a stable reference point, enabling the detection of effect changepoints. In Monte Carlo simulations, our randomization technique outperforms alternatives for changepoint analysis. A particular benefit of our method is that, by establishing the bounds for time-invariant effects before interacting with actual estimates, it is able to differentiate stochastic fluctuations from genuine changes. We demonstrate the method's utility by applying it to a popular study on the relationship between alliances and the initiation of militarized interstate disputes. The example illustrates how the technique can help researchers make inferences about where changes occur in dynamic relationships and ask important questions about such changes.
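The core idea of this abstract (permute time to build a time-invariant null, then flag periods whose estimates escape it) can be sketched in miniature. The panel below is simulated, and the window width, effect sizes, and permutation count are arbitrary illustrative choices, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated series: the effect of x on y jumps from 0 to 2 halfway through.
T = 200
x = rng.normal(size=T)
beta = np.where(np.arange(T) < T // 2, 0.0, 2.0)
y = beta * x + rng.normal(size=T)

def windowed_slopes(x, y, width=20):
    """OLS slope of y on x within non-overlapping time windows."""
    return np.array([
        np.polyfit(x[s:s + width], y[s:s + width], 1)[0]
        for s in range(0, len(x), width)
    ])

# Break the role of time: permuting (x, y) pairs jointly destroys any
# time-varying structure while preserving the overall x-y relationship.
null = []
for _ in range(500):
    p = rng.permutation(T)
    null.append(windowed_slopes(x[p], y[p]))
lo, hi = np.quantile(np.concatenate(null), [0.025, 0.975])

# Windows whose estimate falls outside the time-invariant reference
# bounds are candidate locations of an effect change.
observed = windowed_slopes(x, y)
flagged = (observed < lo) | (observed > hi)
```

Because the null bounds are fixed before the observed per-window estimates are examined, ordinary sampling noise stays inside them while a genuine shift in the effect does not.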
In: Network science, Volume 9, Issue 1, pp. 99-122
ISSN: 2050-1250
Population analyses of functional connectivity have provided a rich understanding of how brain function differs across time, individual, and cognitive task. An important but challenging task in such population analyses is the identification of reliable features that describe the function of the brain, while accounting for individual heterogeneity. Our work is motivated by two particularly important challenges in this area: first, how can one analyze functional connectivity data over populations of individuals, and second, how can one use these analyses to infer group similarities and differences. Motivated by these challenges, we model population connectivity data as a multilayer network and develop the multi-node2vec algorithm, an efficient and scalable embedding method that automatically learns continuous node feature representations from multilayer networks. We use multi-node2vec to analyze resting state fMRI scans over a group of 74 healthy individuals and 60 patients with schizophrenia. We demonstrate how multilayer network embeddings can be used to visualize, cluster, and classify functional regions of the brain for these individuals. We furthermore compare the multilayer network embeddings of the two groups. We identify significant differences between the groups in the default mode network and salience network, findings that are supported by the triple network model theory of cognitive organization. Our findings reveal that multi-node2vec is a powerful and reliable method for analyzing multilayer networks. Data and publicly available code are available at https://github.com/jdwilson4/multi-node2vec.
In: Risk analysis: an international journal, Volume 11, Issue 4, pp. 633-640
ISSN: 1539-6924
Dose‐response curves were developed for the immobilization response in Daphnia magna to four toxicants. The purpose of this work was to study the effect of the form of the model and the number of concentration levels used on the estimates of typical low‐dose effective concentrations (1%, 5%, 10%). The generalized four‐parameter logistic model was used as the reference. When using 12 concentration levels, one of the logistic family two‐ or three‐parameter models was shown reliably to represent each of these various sets of dose‐response data, and to provide adequate estimates of EC01 and EC05, as well as EC10 and EC50. For two of the toxicants, an asymmetric model was required. When reducing the number of concentrations to five, the EC10 and EC50 were well estimated by the probit model, with acceptable results at the EC05 level.
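The logistic family mentioned above can be illustrated with a two-parameter log-logistic form, whose effective concentrations have a closed-form inversion. The EC50 and slope values below are hypothetical, and this parameterization is a standard textbook form, not necessarily the exact one fitted in the paper.

```python
def logistic_response(conc, ec50, slope):
    """Two-parameter log-logistic model: fraction of organisms
    responding (e.g., immobilized) at a given concentration."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

def effective_concentration(p, ec50, slope):
    """Concentration producing response fraction p (p = 0.05 gives
    the EC05), obtained by inverting the model above."""
    return ec50 * (p / (1.0 - p)) ** (1.0 / slope)

# Hypothetical parameters: EC50 of 1.2 mg/L, slope of 2.
ec10 = effective_concentration(0.10, ec50=1.2, slope=2.0)
ec05 = effective_concentration(0.05, ec50=1.2, slope=2.0)
```

Low-dose estimates such as the EC01 and EC05 sit on the flat tail of this curve, which is why the abstract's finding (that model form and the number of concentration levels matter most there) is intuitive: small changes in the fitted slope move the tail substantially.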
In: Social networks: an international journal of structural analysis, Volume 49, pp. 37-47
ISSN: 0378-8733
In: PNAS nexus, Volume 1, Issue 3
ISSN: 2752-6542
Emerging research has begun investigating the neural underpinnings of the biological and psychological differences that drive political ideology, attitudes, and actions. Here, we explore the neurological roots of politics through conducting a large sample, whole-brain analysis of functional connectivity (FC) across common fMRI tasks. Using convolutional neural networks, we develop predictive models of ideology using FC from fMRI scans for nine standard task-based settings in a novel cohort of healthy adults (n = 174, age range: 18 to 40, mean = 21.43) from the Ohio State University Wellbeing Project. Our analyses suggest that liberals and conservatives have noticeable and discriminative differences in FC that can be identified with high accuracy using contemporary artificial intelligence methods and that such analyses complement contemporary models relying on socio-economic and survey-based responses. FC signatures from retrieval, empathy, and monetary reward tasks are identified as important and powerful predictors of conservatism, and activations of the amygdala, inferior frontal gyrus, and hippocampus are most strongly associated with political affiliation. Although the direction of causality is unclear, this study suggests that the biological and neurological roots of political behavior run much deeper than previously thought.
In: Risk analysis: an international journal, Volume 13, Issue 4, pp. 379-382
ISSN: 1539-6924
In: Risk analysis: an international journal, Volume 18, Issue 1, pp. 1-2
ISSN: 1539-6924