210,811 results
SSRN
Working paper
In: Organizational research methods: ORM, Band 22, Heft 4, S. 941-968
ISSN: 1552-7425
Research has emphasized the limitations of qualitative and quantitative approaches to studying organizational phenomena. For example, in-depth interviews are resource-intensive, while questionnaires with closed-ended questions can only measure predefined constructs. With the recent availability of large textual data sets and increased computational power, text mining has become an attractive method that has the potential to mitigate some of these limitations. Thus, we suggest applying topic modeling, a specific text mining technique, as a new and complementary strategy of inquiry to study organizational phenomena. In particular, we outline the potential of structural topic modeling for organizational research and provide a step-by-step tutorial on how to apply it. Our application example builds on 428,492 reviews of Fortune 500 companies from the online platform Glassdoor, on which employees can evaluate organizations. We demonstrate how structural topic models allow researchers to inductively identify topics that matter to employees and to quantify their relationship with employees' perception of organizational culture. We discuss the advantages and limitations of topic modeling as a research method and outline how future research can apply the technique to study organizational phenomena.
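The abstract above describes fitting topic models to review text. As a minimal illustration of the bag-of-words preprocessing that such models are fit on (this is a generic sketch, not the authors' pipeline; the function names, stopword list, and toy reviews are hypothetical), consider:

```python
from collections import Counter
import re

def tokenize(text, stopwords=frozenset({"the", "a", "is", "and", "to"})):
    """Lowercase, split on non-letters, drop stopwords and short tokens."""
    return [t for t in re.split(r"[^a-z]+", text.lower())
            if len(t) > 2 and t not in stopwords]

def bag_of_words(docs):
    """Return (vocabulary, per-document term-count vectors) -- the
    document-term input that LDA-style topic models are estimated from."""
    tokenized = [tokenize(d) for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    matrix = []
    for doc in tokenized:
        counts = Counter(doc)
        matrix.append([counts.get(t, 0) for t in vocab])
    return vocab, matrix

# Toy "reviews" standing in for Glassdoor text (hypothetical data).
reviews = [
    "Great culture and supportive management.",
    "Management ignores feedback; pay is low.",
]
vocab, dtm = bag_of_words(reviews)
print(vocab)
print(dtm)
```

A topic model then decomposes this document-term matrix into topic-word and document-topic distributions; structural topic models additionally let document covariates (e.g., company or rating) shift topic prevalence.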
Front matter -- Front cover -- Imprint -- Contents -- List of figures -- List of tables -- List of abbreviations -- Part I: Relevance and research question -- 1. The dual significance of topic modeling for communication studies (KW) -- 1.1 Guiding research interest -- 1.1.1 Algorithmic topics as a subject of KW -- 1.1.2 Algorithmic topics in the methodology of KW -- 1.1.3 Methodological problem: comparing manual-deductive and automatic-inductive topic analysis -- 1.2 Structure of the thesis -- 1.3 Acknowledgments -- Part II: Algorithms and code -- Introduction: On the significance and rationale of algorithmic logic -- 2. Technological change: the Fourth Industrial Revolution -- 2.1 Big Data -- 2.1.1 Between societal change, database problem, and pathos -- 2.1.2 Digitization and datafication -- 2.1.3 Algorithmic logic as a driver of automation -- 2.2 Artificial intelligence -- 2.2.1 Nature and goal of AI -- 2.2.2 History of AI -- 2.2.3 AI trends and future forecasts -- 2.3 Implications for everyday life and society -- 2.3.1 Techno-social environment -- 2.3.2 How do digitization and Big Data change the choice architecture? -- 2.3.3 Reverse Turing test: dehumanization in the techno-social environment -- 2.3.4 Ethics for algorithms -- 2.4 Implications for empirical research -- 2.4.1 Big Data and algorithms in empirical research -- 2.4.2 Data science and computational social sciences -- 2.4.3 Criticism of Big Data and related approaches -- 2.5 Interim conclusion: algorithms and code in the macro perspective -- 2.5.1 Big Data and AI in the subject matter and methodology of KW -- 2.5.2 Key conclusions for the computational social sciences -- 3. Algorithmic processing of natural language -- 3.1 Language as a sign system (de Saussure) -- 3.1.1 The central relevance of language -- 3.1.2 The dyadic concept of the sign and its further development.
"Beloved for its engaging, conversational style, this valuable book is now in a fully updated second edition that presents the latest developments in longitudinal structural equation modeling (SEM) and new chapters on missing data, the random intercepts cross-lagged panel model (RI-CLPM), longitudinal mixture modeling, and Bayesian SEM. Emphasizing a decision-making approach, leading methodologist Todd D. Little describes the steps of modeling a longitudinal change process. He explains the big picture and technical how-tos of using longitudinal confirmatory factor analysis, longitudinal panel models, and hybrid models for analyzing within-person change. User-friendly features include equation boxes that translate all the elements in every equation, tips on what does and doesn't work, end-of-chapter glossaries, and annotated suggestions for further reading. The companion website provides data sets for the examples--including studies of bullying and victimization, adolescents' emotions, and healthy aging--along with syntax and output, chapter quizzes, and the book's figures. New to This Edition: *Chapter on missing data, with a spotlight on planned missing data designs and the R-based package PcAux. *Chapter on longitudinal mixture modeling, with Whitney Moore. *Chapter on the random intercept cross-lagged panel model (RI-CLPM), with Danny Osborne. *Chapter on Bayesian SEM, with Mauricio Garnier. *Revised throughout with new developments and discussions, such as how to test models of experimental effects."
In: Structural equation modeling: a multidisciplinary journal, Band 16, Heft 3, S. 397-438
ISSN: 1532-8007
In: Structural equation modeling: a multidisciplinary journal, Band 23, Heft 4, S. 555-566
ISSN: 1532-8007
In: BIOCON-D-24-01599
SSRN
In: Frontiers in digital humanities, Band 5
ISSN: 2297-2668
In: Advances in Strategic Management, 41
SSRN
SSRN
Working paper
In: Sociological methods and research, Band 16, Heft 1, S. 78-117
ISSN: 1552-8294
Practical problems that are frequently encountered in applications of covariance structure analysis are discussed and solutions are suggested. Conceptual, statistical, and practical requirements for structural modeling are reviewed to indicate how basic assumptions might be violated. Problems associated with estimation, results, and model fit are also mentioned. Various issues in each area are raised, and possible solutions are provided to encourage more appropriate and successful applications of structural modeling.
Unstructured textual data is growing rapidly, and practitioners from diverse disciplines are experiencing a need to structure this massive amount of data. Topic modeling is one of the most widely used techniques for analyzing and understanding the latent structure of large text collections. Probabilistic graphical models are the main building block behind topic modeling, and they are used to express assumptions about the latent structure of complex data. This dissertation addresses four problems related to drawing structure from high-dimensional data and improving the text mining process. Studying the ebb and flow of ideas during critical events, e.g., an epidemic, is very important to understanding the reporting or coverage around the event and the impact of the event on society. This can be accomplished by capturing the dynamic evolution of topics underlying a text corpus. We propose an approach to this problem that identifies segment boundaries marking significant shifts in topic coverage. To identify segment boundaries, we embed a temporal segmentation algorithm around a topic modeling algorithm to capture such shifts. A key advantage of our approach is that it integrates with existing topic modeling algorithms in a transparent manner; thus, more sophisticated algorithms can be readily plugged in as research in topic modeling evolves. We apply this algorithm to data from the iNeighbors system, analyzing six neighborhoods (three economically advantaged and three economically disadvantaged) to evaluate differences in conversations for statistical significance. Our findings suggest that social technologies may afford opportunities for democratic engagement in contexts that are otherwise less likely to support deliberation and participatory democracy.
We also examine the progression in coverage of historical newspapers about the 1918 influenza epidemic by applying our algorithm to the Washington Times archives. The algorithm succeeds in identifying important qualitative features of news coverage of the pandemic. Visually convincing presentation of the results of data mining algorithms and models is crucial to analyzing them and drawing conclusions. We develop ThemeDelta, a visual analytics system for extracting and visualizing temporal trends, clustering, and reorganization in time-indexed textual datasets. ThemeDelta is supported by a dynamic temporal segmentation algorithm that integrates with topic modeling algorithms to identify change points where significant shifts in topics occur. This algorithm detects not only the clustering and associations of keywords in a time period, but also their convergence into topics (groups of keywords) that may later diverge into new groups. The visual representation of ThemeDelta uses sinuous, variable-width lines to show this evolution on a timeline, using color for categories and line width for keyword strength. We demonstrate how interaction with ThemeDelta helps capture the rise and fall of topics by analyzing archives of historical newspapers, U.S. presidential campaign speeches, and social messages collected through iNeighbors. ThemeDelta is evaluated in a qualitative expert user study involving three researchers from rhetoric and history using the historical newspapers corpus. Time and location are key parameters of any event; neglecting them while discovering topics from a collection of documents means missing valuable information. We propose a dynamic spatial topic model (DSTM), a true spatio-temporal model that enables disaggregating a corpus's coverage into location-based reporting and understanding how such coverage varies over time.
DSTM naturally generalizes traditional spatial and temporal topic models, so that many existing formalisms can be viewed as special cases of DSTM. We demonstrate a successful application of DSTM to multiple newspapers from the Chronicling America repository, showing how our approach helps uncover key differences in the coverage of the flu as it spread through the nation and providing possible explanations for such differences. Major events that can change the flow of people's lives are important to predict, especially when powerful models and sufficient data are available at our fingertips. Embedding the DSTM in a predictive setting is the last part of this dissertation. To predict events and their locations across time, we present a predictive dynamic spatial topic model that can predict future topics and their locations from unseen documents. We show the applicability of our proposed approach by applying it to streaming tweets from Latin America, where the prediction approach was successful in identifying major events and their locations. (Ph.D. dissertation)
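The temporal-segmentation idea in this abstract, flagging a boundary wherever the topic mixture shifts sharply between adjacent time slices, can be sketched with a divergence threshold over per-slice topic distributions. This is a generic illustration, not the dissertation's actual algorithm; the Jensen-Shannon measure, the threshold value, and the toy weekly mixtures are all assumptions:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log2(a / b) for a, b in zip(x, y) if a > 0)
    return (kl(p, m) + kl(q, m)) / 2

def segment_boundaries(slices, threshold=0.1):
    """Indices i where the topic mixture of slice i differs sharply from
    slice i-1, i.e. candidate topic-shift (segment) boundaries."""
    return [i for i in range(1, len(slices))
            if js_divergence(slices[i - 1], slices[i]) > threshold]

# Toy per-week topic mixtures over 3 topics (hypothetical numbers):
# weeks 0-1 are dominated by topic 0, weeks 2-3 by topic 2.
weekly = [
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],
    [0.1, 0.2, 0.7],
]
print(segment_boundaries(weekly))  # prints [2]: the shift into week 2
```

Because the boundary detector only consumes per-slice topic distributions, any topic modeling algorithm that outputs such distributions can be plugged in, which mirrors the "transparent integration" property the abstract claims.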
BASE
In: Methodology in the Social Sciences
In: Quantitative applications in the social sciences 179
World Affairs Online