Policy planning at the FTC [Federal Trade Commission]: a commissioner [Mayo J. Thompson] who really believes in it?
In: Antitrust law & economics review, Volume 6, Issue 4, pp. 35-58
ISSN: 0003-6048
6 results
In: Antitrust law & economics review, Volume 5, pp. 19-36
ISSN: 0003-6048
In: Antitrust law & economics review, Volume 5, pp. 43-58
ISSN: 0003-6048
In: Social policy and society: SPS; a journal of the Social Policy Association, Volume 16, Issue 2, pp. 219-236
ISSN: 1475-3073
Declining trust in statistical agencies has recently complicated the endeavour to collect the high-quality, timely data used to inform US policy and practice. In this context, understanding how respondents come to trust particular statistical agencies and their products is critically important. This article details a series of cognitive interviews (85) and focus groups (3) used to examine how the US public develops trust in statistical agencies, their statistical products and their use of administrative records. Results show that respondents draw on two models of trust in their reasoning: experience-based and cultural-repertoire-based. When respondents did not have experience with a particular institution and/or its products, cultural values including personal liberty, cost savings and the promotion of social goods (for example, government-sponsored schools and hospitals) were found to influence their motivations to trust or distrust. As a result, appeals to cultural values may have the potential to increase trust among respondents. Familiarity with statistical agencies and their products may also increase respondents' levels of trust.
In: Social science computer review: SSCORE
ISSN: 1552-8286
Open-ended survey questions can give researchers insights beyond commonly used closed-ended formats by allowing respondents to provide information with few constraints and in their own words. Open-ended web probes are also increasingly used to inform the design and evaluation of survey questions. However, open-ended questions are more susceptible to insufficient or irrelevant responses, which are burdensome and time-consuming to identify and remove manually; this often results in underuse of open-ended questions and, when they are used, potential inclusion of poor-quality data. To address these challenges, we developed and publicly released the Semi-Automated Nonresponse Detection for Survey text (SANDS), an item-nonresponse detection approach based on a bidirectional transformer (BERT) language model, fine-tuned using Simple Contrastive Sentence Embedding (SimCSE) and targeted human coding, to categorize open-ended text data as valid or likely nonresponse. This approach is powerful in that it uses natural language processing, whereas existing nonresponse detection approaches have relied exclusively on rules or regular expressions, or on bag-of-words methods that tend to perform poorly on the short texts, typos, and uncommon words prevalent in open-text survey data. This paper presents the development of SANDS and a quantitative evaluation of its performance and potential bias, using open-text responses from a series of web probes as case studies. Overall, the SANDS model performed well in identifying likely valid responses for quantitative or qualitative analysis, particularly on health-related data. Developed for generalizable use and accessible to others, the SANDS model can greatly improve the efficiency of identifying inadequate and irrelevant open-text responses, expanding opportunities to use open-text data to inform question design and improve survey data quality.
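As a rough illustration of how a released nonresponse classifier of this kind might be applied, the sketch below scores open-text responses with the Hugging Face transformers pipeline API. The checkpoint path, label names, and example responses are placeholders introduced here for illustration; they are not details taken from the abstract.

# Minimal sketch: classify open-text survey responses as valid or likely
# nonresponse with a fine-tuned sequence-classification checkpoint.
# "path/to/sands-checkpoint" is a placeholder, not the published model ID.
from transformers import pipeline

classifier = pipeline("text-classification", model="path/to/sands-checkpoint")

responses = [
    "I trust the census because the data stay confidential.",  # likely valid
    "asdfgh",                                                  # gibberish
    "idk",                                                     # uninformative
]

for text in responses:
    pred = classifier(text)[0]  # e.g., {'label': ..., 'score': ...}
    # Keep responses the model scores as valid; route the rest to review.
    print(f"{pred['label']:<12} {pred['score']:.2f}  {text}")

In practice the model's labels and decision threshold would come from the released checkpoint's documentation; flagged responses could be dropped automatically or routed to manual review, depending on tolerance for false positives.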
In: Journal of survey statistics and methodology: JSSAM, Volume 9, Issue 2, pp. 205-208
ISSN: 2325-0992