Abstract: The six phenoxyalkanoic acid herbicides currently used in the European Union have similar molecular structures. We therefore assumed that the soil components involved in the adsorption mechanisms of these herbicides are identical. The values of the adsorption distribution coefficient Kd, obtained via batch experiments involving typical Polish Arenosol, Luvisol, and Chernozem profiles with a native pH of 4.2–7.7, were examined using Lasso regression, as was adsorption on isolated fractions of humic substances, Al2O3, and goethite. The neutral forms of the herbicides were adsorbed on the accessible surface of fulvic acids, which cover soil mesopores larger than 2.5 nm. The models revealed that fulvic acids had a lognormal-like distribution in soil pores. Herbicide anions were adsorbed on the pH-dependent sites of Al oxyhydroxides and on the sites created by Al3+ species adsorbed on the surface of fulvic acids (both types of site active up to pH 7.5), on the sites of humic acids associated with adsorbed Al3+ species, on the sites of Fe oxyhydroxides (active at pH < 5), and, to a limited extent, on the sites of humins. Two models describing the adsorption of phenoxyalkanoic acid herbicides in soils were created. The simpler model was based on humic substance fractions and variables related to the potential acidity of soils. In the more extensive model, humic substance fractions and Al and Fe oxyhydroxide contents were used as predictors, combined where necessary with the modified Henderson–Hasselbalch formula to estimate the activity ranges of pH-dependent sorption sites. The findings revealed that fulvic and humic acids were the main adsorbents of phenoxyalkanoic herbicides in soils, indicating that transport of the herbicides with dissolved organic matter is an important mechanism of groundwater and surface water contamination with these chemicals.
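Since these herbicides are weak acids, the neutral/anionic split underlying the two adsorption mechanisms follows the Henderson–Hasselbalch relation. A minimal speciation sketch (the pKa value is illustrative, not one reported in the study):

```python
import math

def anionic_fraction(pH, pKa):
    """Henderson-Hasselbalch: fraction of a weak acid present as the anion."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Example: 2,4-D has a pKa near 2.7 (illustrative value), so across the
# studied soil pH range (4.2-7.7) the herbicide is almost entirely anionic,
# leaving only a small neutral fraction for adsorption on fulvic acids.
for pH in (4.2, 6.0, 7.7):
    print(pH, round(anionic_fraction(pH, pKa=2.7), 4))
```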
Abstract: Military health risk assessors, medical planners, operational planners, and defense system developers require knowledge of human responses to doses of biothreat agents to support force health protection and chemical, biological, radiological, and nuclear (CBRN) defense missions. This article reviews extensive data from 118 human volunteers administered aerosols of the bacterial agent Francisella tularensis, strain Schu S4, which causes tularemia. The data set includes the incidence of early-phase febrile illness following administration of well-characterized inhaled doses of F. tularensis. Supplemental data on human body temperature profiles over time, available from de-identified case reports, are also presented. A unified, logically consistent model of early-phase febrile illness is described as a lognormal dose-response function for febrile illness linked with a stochastic time profile of fever. Three parameters are estimated from the human data to describe the time profile: the incubation period (onset time for fever), the rise time of fever, and the near-maximum body temperature. Inhaled-dose dependence and variability are characterized for each of the three parameters. These parameters enable a stochastic model for the response of an exposed population that incorporates individual-by-individual variability by drawing random samples from the statistical distributions of the three parameters for each individual. This model provides risk assessors and medical decision-makers reliable representations of the predicted health impacts of early-phase febrile illness for as long as one week after aerosol exposure of human populations to F. tularensis.
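The linked dose-response and fever-profile structure can be sketched as follows. The lognormal dose-response form matches the description above, but every numerical parameter (median dose, GSD, and the fever-profile distributions) is an illustrative placeholder, not a fitted value from the volunteer data:

```python
import math
import random

def lognormal_cdf(dose, median, gsd):
    """Lognormal dose-response: probability of febrile illness at an inhaled dose."""
    if dose <= 0:
        return 0.0
    z = (math.log(dose) - math.log(median)) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_fever_profile(rng):
    """Draw one individual's fever-time parameters.

    In the article all three parameters are dose-dependent; the fixed
    distributions below are illustrative placeholders.
    """
    incubation_h = rng.lognormvariate(math.log(72.0), 0.4)  # onset of fever, hours
    rise_time_h = rng.lognormvariate(math.log(8.0), 0.3)    # rise time, hours
    t_max_c = rng.gauss(39.5, 0.4)                          # near-max body temp, deg C
    return incubation_h, rise_time_h, t_max_c

rng = random.Random(1)
# Population response: an illness probability from the dose-response curve,
# plus one fever profile per (randomly varying) individual
p_ill = lognormal_cdf(1000.0, median=1000.0, gsd=5.0)  # dose at the median gives 0.5
profiles = [sample_fever_profile(rng) for _ in range(5)]
```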
Abstract: Risks of allergic contact dermatitis (ACD) from consumer products intended for extended (non-piercing) dermal contact are regulated by E.U. Directive EN 1811, which limits released Ni to a weekly equivalent dermal load of ≤0.5 μg/cm2. Similar approaches for thousands of known organic sensitizers are hampered by the inability to quantify respective ACD-elicitation risk levels. To help address this gap, normalized values of cumulative risk for eliciting a positive ("≥+") clinical patch test response reported in 12 studies for a total of n = 625 Ni-sensitized patients were modeled in relation to observed ACD-eliciting Ni loads, yielding an approximate lognormal (LN) distribution with a geometric mean and standard deviation of GMNi = 15 μg/cm2 and GSDNi = 8.0, respectively. Such data for five sensitizers (including formaldehyde and 2-hydroxyethyl methacrylate) were also ∼LN distributed, but with a common GSD value equal to GSDNi and with heterogeneous sensitizer-specific GM values, each defining a respective ACD-eliciting potency GMNi/GM relative to Ni. Such potencies were also estimated for nine (meth)acrylates by applying this general LN ACD-elicitation risk model to respective sets of fewer data. ACD-elicitation risk patterns observed for Cr(VI) (n = 417) and Cr(III) (n = 78) were fit to mixed-LN models in which ∼30% and ∼40% of the most sensitive responders, respectively, were estimated to exhibit a LN response also governed by GSDNi. The observed common LN-response shape parameter GSDNi may reflect a common underlying ACD mechanism and suggests a common interim approach to quantitative ACD-elicitation risk assessment based on available clinical data.
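The LN elicitation-risk model with the reported Ni parameters (GM = 15 μg/cm2, GSD = 8.0) can be sketched directly; the relative-potency scaling in the final comment follows the common-GSD assumption described above:

```python
import math

GM_NI, GSD_NI = 15.0, 8.0  # ug/cm^2 and dimensionless, from the abstract

def elicitation_risk(load, gm=GM_NI, gsd=GSD_NI):
    """Cumulative lognormal fraction of sensitized patients eliciting a >= + response."""
    if load <= 0:
        return 0.0
    z = (math.log(load) - math.log(gm)) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the EN 1811 weekly-equivalent limit of 0.5 ug/cm^2, the fitted curve
# predicts an elicitation fraction of roughly 5% among Ni-sensitized patients.
risk_at_limit = elicitation_risk(0.5)

# For another sensitizer with relative potency P = GM_NI / GM, the same
# curve shifts under the common-GSD model: elicitation_risk(load * P).
```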
In: Ribeiro Duarte, A.S. 2013, The interpretation of quantitative microbial data: meeting the demands of quantitative microbiological risk assessment. National Food Institute, Technical University of Denmark, Søborg.
Foodborne diseases carry important social, health, political and economic consequences. Quantitative microbiological risk assessment (QMRA) is a science-based tool used to estimate the risk that foodborne pathogens pose to human health, i.e. it estimates the number of cases of human foodborne infection or disease due to ingestion of a specific pathogenic microorganism conveyed by specific food products; it is also used to assess the effect of different control measures. In their role as risk managers, public authorities base their policies on the outcome of risk assessment studies. Therefore, these assessments need to be transparent and affected by minimum imprecision.
The potential exposure to and infection by foodborne microorganisms depend, among other factors, on the microbial concentrations in food and on the microbial behaviour (growth, survival and transfer) along the food chain. Both factors are therefore important inputs to QMRA. Since microbial concentrations vary among different samples of a food lot, probability distributions are used to describe these concentrations in QMRA. As microbial behaviour varies with food storage conditions (because it depends on intrinsic properties of the food and extrinsic environmental variables), predictive models of bacterial growth and survival that account for those factors are used in QMRA to describe expected changes in bacterial concentrations. Both probability distributions and predictive models may contribute to the imprecision of QMRA: on one hand, there are several alternative distributions available to describe concentrations and several methods to fit distributions to bacterial data; on the other hand, predictive models are built on controlled laboratory experiments of microbial behaviour and may not be appropriate in the context of real food. Hence, these models need to be validated with independent data for real-food conditions before use in QMRA. The overall goal of the work presented in this thesis is to study different factors related to quantitative microbial data that may have an impact on the outcome of QMRA, in order to find appropriate solutions that limit the imprecision of risk estimates. A new method of fitting a distribution to microbial data is developed that estimates both prevalence and the distribution of concentrations (manuscript I). Different probability distributions are used to describe concentrations in a simple QMRA model and the resulting risk estimates are compared (manuscript II).
The predictive accuracy of a microbial growth model against different literature datasets is compared in order to identify factors related to experimental data collection with a relevant impact on the model evaluation process (manuscript III). In manuscript I ("Fitting a distribution to microbial counts: making sense of zeroes") it is hypothesised that when "artificial" zero microbial counts, which originate by chance from contaminated food products, are not separated from "true" zeroes originating from uncontaminated products, the estimates of prevalence and concentration may be inaccurate. Such inaccuracy may have an especially relevant impact in QMRA in situations where highly pathogenic microorganisms are involved and where growth can occur along the food pathway. Hence, a method is developed that provides accurate estimates of concentration parameters and differentiates between artificial and true zeroes, thus also accurately estimating prevalence. It is demonstrated that, depending on the original distribution of concentrations and the limit of quantification (LOQ) of microbial enumeration, it may be incorrect to treat artificial zeroes as censored below a quantification threshold. The method presented estimates the prevalence of contamination within a food lot and the parameters (mean and standard deviation) characterizing the within-lot distribution of concentrations, without assuming a LOQ and using raw plate count data as input. Counts resulting from both contaminated and uncontaminated sample units are analysed together, which allows the proportion of artificial zeroes among the total of zero counts to be estimated. The method yields good estimates of mean, standard deviation and prevalence, especially at low prevalence levels and low expected standard deviation.
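The core of the artificial-zero problem can be illustrated with a Poisson sampling sketch: a contaminated sample can still yield zero colonies, with probability exp(-c·v). The function names and example numbers below are illustrative, not the manuscript's notation:

```python
import math

def p_artificial_zero(conc_cfu_per_g, plated_g):
    """P(zero colonies | contaminated) under Poisson plate counts."""
    return math.exp(-conc_cfu_per_g * plated_g)

def zero_mixture(prevalence, conc, plated_g):
    """Split the expected zero counts into true zeroes (uncontaminated units)
    and artificial zeroes (contaminated units that yield zero by chance)."""
    true_z = 1.0 - prevalence
    art_z = prevalence * p_artificial_zero(conc, plated_g)
    return true_z, art_z

# Example (illustrative numbers): 20% prevalence, 2 CFU/g mean, 1 g plated.
true_z, art_z = zero_mixture(0.20, 2.0, 1.0)
# Treating every zero count as a true zero would inflate the apparent
# uncontaminated share by art_z, here about 0.027.
```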
This study shows that one of the keys to an accurate characterization of the overall microbial contamination is the correct identification and separation of true and artificial zeroes, and that estimation of prevalence and estimation of the distribution of concentrations are interrelated and therefore should be done simultaneously. In manuscript II ("Impact of microbial count distributions on human health risk estimates") the impact of fitting microbial distributions on risk estimates is investigated under two different concentration scenarios and a range of prevalence levels. Four different parametric distributions are used to investigate the importance of accounting for the randomness in counts, the difference between treating true zeroes as such or as censored below a LOQ, and the importance of making the correct assumption about the underlying distribution of concentrations. By running a simulation experiment it is possible to assess the difference between the expected risk and the risk estimated using a lognormal, a zero-inflated lognormal, a Poisson-gamma and a zero-inflated Poisson-lognormal distribution. The method developed in manuscript I is used in this study to fit the latter. The results show that the impact on risk estimates of the choice of probability distribution to describe concentrations at retail depends on both the concentration and prevalence levels, but that in general it is larger at high levels of microbial contamination (high prevalence and high concentration). Also, zero-inflation tends to improve the accuracy of the risk estimates. In manuscript III ("Variability and uncertainty in the evaluation of predictive models with literature data – consequences to quantitative microbiological risk assessment") it is assessed how different growth settings inherent to literature datasets affect the performance of a growth model compared to its performance with the data used to generate it.
The effect on model performance of the number of observations; the ranges of temperature, water activity and pH under which observations were made; the presence or absence of lactic acid in the growth environment; the use of a pathogenic or non-pathogenic strain; and the type of growth environment is analysed. Model performance is measured in terms of DifAf, the difference between the accuracy factor (Af) of the model with the data used to generate it and the Af with an independent dataset. The study is performed using a square-root-type model for the growth rate of Escherichia coli in response to four environmental factors, and literature data that have previously been used to evaluate this model. It is hypothesised that the Af of the model with the data used to generate it reflects the model's best possible performance, and hence that DifAf is smaller and less variable when the conditions of an independent dataset are closer to those of the data that originated the model. The distributions of DifAf values obtained with different datasets are compared graphically and statistically. The results suggest that if predictive models developed under controlled experimental conditions are validated against independent datasets collected from the published literature, these datasets must contain a high number of observations and be based on similar experimental growth media in order to reduce the variation in model performance. By reducing this variation, the predictive model's contribution to the uncertainty and variability in QMRA also decreases, which positively affects the precision of the risk estimates. To conclude, this thesis contributes to the clarification of the impact that the analysis of microbial data may have on QMRA, provides a new, accurate method of fitting a distribution to microbial data, and suggests guidelines for the selection of appropriate published datasets for the validation of predictive models of microbial behaviour before their use in QMRA.
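The accuracy-factor comparison in manuscript III can be sketched as follows; the Af definition is the standard mean-absolute-log10-ratio form, while the sign convention chosen for DifAf here is an assumption:

```python
import math

def accuracy_factor(predicted, observed):
    """Accuracy factor: 10 ** (mean |log10(predicted/observed)|).
    Perfect predictions give Af = 1; Af = 1.5 means predictions deviate
    from observations by ~50% on average."""
    n = len(predicted)
    s = sum(abs(math.log10(p / o)) for p, o in zip(predicted, observed))
    return 10.0 ** (s / n)

def dif_af(af_original, af_evaluation):
    """DifAf: difference between the Af with the model-generation data and
    the Af with an independent evaluation dataset (sign convention assumed)."""
    return af_evaluation - af_original
```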
Perspectives for future work include the validation of the method developed in manuscript I with real data, and its presentation as a tool made available to the scientific community, for example as a package for the statistical software R. Also, the author expects that a standardized way of reporting microbial counts that clearly specifies the steps taken during data collection will be adopted in the future. Extending the work presented in manuscript II will allow more sound conclusions to be drawn about the general impact of different frequency distributions on risk estimates. Following manuscript III, a simulation study could help to investigate to what extent QMRA-targeted development and validation of predictive models are necessary for the accurate estimation of risk. Future needs in food microbiology and QMRA include the development of appropriate statistical methods to summarize novel data obtained from different "omics" technologies, adaptation of the current structure of QMRA studies to allow them to make use of such data, and the assessment of the variability and uncertainty attending those data.
Abstract: The National Research Council 2009 "Silver Book" panel report included a recommendation that the U.S. Environmental Protection Agency (EPA) should increase all of its chemical carcinogen (CC) potency estimates by ∼7-fold to adjust for a purported median-vs.-mean bias that I recently argued does not exist (Bogen KT. "Does EPA underestimate cancer risks by ignoring susceptibility differences?," Risk Analysis, 2014; 34(10):1780–1784). In this issue of the journal, my argument is critiqued for having flaws concerning: (1) the intent, bias, and conservatism of EPA estimates of CC potency; (2) bias in potency estimates derived from epidemiology; and (3) human-animal CC-potency correlation. However, my argument remains valid, for the following reasons. (1) EPA's default approach to estimating CC risks has correctly focused on bounding average (not median) individual risk under a genotoxic mode-of-action (MOA) assumption, although pragmatically the approach leaves both inter-individual variability in CC susceptibility and widely varying CC-specific magnitudes of fundamental MOA uncertainty unquantified. (2) CC risk estimates based on large epidemiology studies are not systematically biased downward due to limited sampling from broad, lognormal susceptibility distributions. (3) A good, quantitative correlation is exhibited between upper bounds on CC-specific potency estimated from human vs. animal studies (n = 24, r = 0.88, p = 2 × 10⁻⁸). It is concluded that protective upper-bound estimates of individual CC risk that account for heterogeneity in susceptibility, as well as risk comparisons informed by best predictions of average-individual and population risk that address CC-specific MOA uncertainty, should each be used as separate, complementary tools to improve regulatory decisions concerning low-level environmental CC exposures.
Engine research has increasingly focused on the emission of sub-23 nm particulates in recent years. Likewise, current legislative efforts aim to extend particulate number (PN) emission limits to include this previously omitted size range. In Europe, PN measurement equipment and procedures for regulatory purposes are defined by the Particle Measurement Programme (PMP). The latest regulation drafts for sub-23 nm measurements specify counting efficiencies with a 65% cut-off size at 10 nm (d65) and a minimum of 90% above 15 nm (d90). Even though alternative instruments, such as differential mobility spectrometers (DMS), are widely used in laboratory environments, the interpretation of their sub-23 nm measurements has not yet been widely discussed. For this study, particulate emissions of a 1.0 L gasoline direct injection (GDI) engine were measured with a DMS system at low to medium speeds with two load steps. While the particle size distribution (PSD) at the higher load conditions exhibited a bimodal shape, the PSD for the other conditions was unimodal with a peak position below 30 nm. Lognormal fitting of the nucleation and accumulation modes previously yielded results comparable to the established PMP approach, with d50 and d90 of 23 nm and 41 nm, respectively. However, this approach was found unsuitable for sub-23 nm PN measurements owing to incorrect assignment of the nucleation and accumulation modes. Recent literature suggests digital filtering of the PSD from DMS. Here, a modified filtering equation is proposed based on the latest legislative proposals. Subsequently, the new filter was compared with filters for both PMP-equivalent and sub-23 nm processing of DMS data. Compared to the latter, results with the new filter showed up to 17% higher PN emissions and up to 13.6 nm lower geometric mean diameter (GMD) of the PSD.
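A counting-efficiency curve through the drafted anchor points (65% at 10 nm, 90% at 15 nm) could be applied bin-by-bin to a DMS size distribution; the logistic form below is an illustrative choice, not the filtering equation proposed in the paper:

```python
import math

def counting_efficiency(d_nm, d65=10.0, d90=15.0):
    """Logistic efficiency curve passing through (d65, 0.65) and (d90, 0.90).
    The logistic shape is illustrative; only the two anchor points come from
    the draft regulation."""
    a65 = math.log(0.65 / 0.35)          # logit of 0.65
    a90 = math.log(0.90 / 0.10)          # logit of 0.90
    k = (a90 - a65) / (d90 - d65)        # slope fixed by the two anchors
    d0 = d65 - a65 / k                   # midpoint (50% efficiency diameter)
    return 1.0 / (1.0 + math.exp(-k * (d_nm - d0)))

def filter_psd(diams_nm, dN):
    """Apply the efficiency curve bin-by-bin to a measured PSD."""
    return [n * counting_efficiency(d) for d, n in zip(diams_nm, dN)]
```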
Abstract: Leridon (Henri). - The frequency of sexual intercourse: data and consistency analyses. Models of HIV transmission and of the spread of the epidemic rely on variables describing sexual behaviour, such as the number of partners and the frequency of intercourse. It is therefore important to gather information on these variables and to evaluate their degree of accuracy. This paper examines the data on frequency of intercourse collected in the 1992 survey of sexual behaviour in France (ACSF). The frequency reported for the last four weeks is similar for men and women (8.0 and 7.1, respectively); it declines as age (after 25) or the duration of the union increases, falling, for example, from 13 per month during the first year of couple life to fewer than 8 after 15 years. These results confirm those of earlier surveys, such as the 1970 Simon survey. This last-four-weeks frequency is then compared with the "usual" frequency, for respondents with a single partner. The consistency is very strong, showing that respondents hardly distinguish between the two questions. The reported frequency can also be compared with the time since last intercourse. The inverse of the frequency gives an estimate of the interval between two acts of intercourse (for each individual), which constitutes a "closed" interval; the time since last intercourse constitutes an "open" interval. The conditions under which these two measures are comparable are discussed. Under the hypothesis that the probability of intercourse is approximately constant from one day to the next for a given individual, it is shown that the two types of interval have the same mathematical expectation; the survey data agree perfectly with this model, leading to the conclusion that the two questions give consistent answers. With the additional hypothesis of a lognormal distribution of the daily probabilities of intercourse across individuals, it is possible to estimate the complete distribution of intervals. Nevertheless, all of the information collected could suffer from the same type of bias (a tendency to "normalize" reported behaviour), resulting in an overestimation of the consistency of the data and, perhaps, of the usual frequency of intercourse.
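The equal-expectation claim for open and closed intervals can be checked by simulation: for a stationary process with a constant daily probability of intercourse (approximated here by a Poisson process), memorylessness makes the mean time since the last event equal the mean inter-event interval. A sketch with an illustrative rate:

```python
import bisect
import random

def simulate(rate_per_day, n_days, rng, n_surveys=10_000):
    """Compare the mean closed interval (between successive events) with the
    mean open interval (time since last event at a random survey date)."""
    t, events = 0.0, []
    while t < n_days:
        t += rng.expovariate(rate_per_day)  # exponential waiting times
        events.append(t)
    closed = [b - a for a, b in zip(events, events[1:])]
    open_ints = []
    for _ in range(n_surveys):
        s = rng.uniform(events[0], events[-1])      # random survey date
        idx = bisect.bisect_right(events, s) - 1    # preceding event
        open_ints.append(s - events[idx])
    return sum(closed) / len(closed), sum(open_ints) / len(open_ints)

rng = random.Random(42)
mean_closed, mean_open = simulate(0.25, 50_000, rng)
# By memorylessness both means are close to 1 / 0.25 = 4 days.
```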
WHO guidelines and the current French regulatory thresholds are frequently exceeded in the Arve Valley. The aim of this thesis was to assess the feasibility of, and define the methodology for, a study of the short-term effects of atmospheric pollution in the Arve Valley. The proposed method is a time-series analysis of admissions to the emergency department in Sallanches over the period 2007-2015, during which daily admissions for pollution-related pathologies were determined. Daily levels of PM10, NO, NO2, and O3 were collected. The number of tourist nights was used as an adjustment factor in the time-series analysis to account for seasonal fluctuations in the size of the population at risk, and weather data were also collected. A generalized linear model with a log link and a negative binomial distribution was identified as the most suitable statistical model for the daily event counts. For cardiovascular and cerebrovascular disease, the median numbers of daily events were 1 (25th-75th centile, 0-2) and 0 (25th-75th centile, 0-1), respectively; for respiratory disease, the median was 2 (25th-75th centile, 1-4). The small population of the Arve Valley does not yield enough admissions for a statistically reliable analysis of the short-term effect of pollution on emergency department admissions, at least for cardiovascular and cerebrovascular disease. Using tourist-activity data in the statistical analysis is a novel approach. For the Arve Valley, our model is pertinent, but it lacks the statistical power to draw conclusions; extending the study period would provide only a limited gain in power.
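The data-generating structure assumed by the thesis's model (daily counts following a negative binomial distribution, with a log link relating the mean to pollutant levels and tourist nights) can be illustrated by simulating data with exactly that structure. The covariate distributions and coefficients below are purely illustrative assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 365

# Hypothetical daily covariates: PM10 concentration and tourist nights.
pm10 = rng.gamma(shape=4.0, scale=10.0, size=n_days)
tourist_nights = rng.lognormal(mean=8.0, sigma=0.5, size=n_days)

# Log link: log(mu) is linear in the covariates (coefficients are made up).
b0, b1, b2 = -2.0, 0.005, 0.3
mu = np.exp(b0 + b1 * pm10 + b2 * np.log(tourist_nights))

# Negative binomial counts via the gamma-Poisson mixture, with
# dispersion parameter alpha controlling overdispersion around mu.
alpha = 0.5
lam = rng.gamma(shape=1.0 / alpha, scale=mu * alpha)
admissions = rng.poisson(lam)

print("median daily admissions:", np.median(admissions))
```

In practice such a model would be fitted with a GLM routine (e.g. a negative binomial family with a log link); the simulation only shows why a negative binomial, rather than a Poisson, is needed when daily counts are more variable than their mean.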
Abstract In this study, we systematically characterized the airborne dust generated from grinding engineered and natural stone products using a laboratory testing system designed and operated to collect representative respirable dust samples. The four stone samples tested included two engineered stones consisting of crystalline silica in a polyester resin matrix (formulations differed, with Stone A containing up to 90 wt% crystalline silica and Stone B up to 50 wt%), an engineered stone consisting of recycled glass in a cement matrix (Stone C), and a granite. Aerosol samples were collected by respirable dust samplers, total dust samplers, and a Micro-Orifice Uniform Deposit Impactor, and were analyzed by gravimetric analysis and X-ray diffraction to determine dust generation rates, crystalline silica generation rates, and crystalline silica content. Additionally, bulk dust settled on the floor of the testing system was analyzed for crystalline silica content, and real-time particle size distributions were measured using an Aerodynamic Particle Sizer. All stone types generated similar trimodal lognormal number-weighted particle size distributions during grinding, with the most prominent mode at an aerodynamic diameter of about 2.0-2.3 µm, suggesting that dust formation is similar when grinding different stones. Bulk dust from Stone C contained no crystalline silica, while bulk dust from Stone A, Stone B, and granite contained 60, 23, and 30 wt% crystalline silica, respectively. In Stones A and B, the cristobalite form of crystalline silica was more plentiful than the quartz form; only the quartz form was detected in granite. The bulk dust, respirable dust, and total dust for each stone had comparable crystalline silica contents, suggesting that the crystalline silica content of the bulk dust could be representative of that of the respirable dust generated during grinding.
Granite generated more dust per unit volume of material removed than the engineered stones, which all had similar normalized dust generation rates. Stone A had the highest normalized generation rates of crystalline silica, followed by granite, Stone B, and Stone C (no crystalline silica), which likely leads to the same trend of respirable crystalline silica (RCS) exposure when working with these different stones. Manufacturing and adoption of engineered stone products with formulations such as Stone B or Stone C could potentially lower or eliminate RCS exposure risks. Combining all the effects of dust generation rate, size-dependent silica content, and respirable fraction, the highest normalized generation rate of RCS consistently occurs at 3.2-5.6 µm for all the stones containing crystalline silica. Therefore, removing particles in this size range near the generation sources should be prioritized when developing engineering control measures.
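A trimodal lognormal number-weighted size distribution of the kind reported above can be represented as a mixture of three lognormal modes, each defined by a count median diameter (CMD), geometric standard deviation (GSD), and weight. The mode placements and weights below are illustrative assumptions (with the dominant mode near the reported 2.0-2.3 µm range), not the measured values; the sketch also evaluates the fraction of particles in the 3.2-5.6 µm band highlighted for control measures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical modes: (CMD in µm, GSD, number-weight); weights sum to 1.
modes = [(0.5, 1.5, 0.2), (2.1, 1.4, 0.6), (8.0, 1.6, 0.2)]

# Draw particles by first picking a mode, then sampling a lognormal diameter:
# diameters are lognormal with mean=ln(CMD) and sigma=ln(GSD) on the log scale.
n = 100_000
idx = rng.choice(len(modes), size=n, p=[w for _, _, w in modes])
cmd = np.array([m[0] for m in modes])[idx]
gsd = np.array([m[1] for m in modes])[idx]
diam = rng.lognormal(mean=np.log(cmd), sigma=np.log(gsd))

# Number fraction falling in the 3.2-5.6 µm band targeted for removal.
frac_band = np.mean((diam >= 3.2) & (diam <= 5.6))
print("fraction in 3.2-5.6 µm band:", frac_band)
```

Weighting such a mixture by particle volume and silica content per size bin (rather than by number, as here) is what would reproduce the study's finding that the RCS generation rate peaks in the 3.2-5.6 µm range.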