The subject-related literature provided information about the skills, education, and formal competencies required to join teams working on the innovation process. According to the findings presented in this article, previous studies have insufficiently investigated gender-related issues in the decisions of managers who involve specialists in the innovation process. Thus, the purpose of this research was to identify, examine, and describe differences in the participation of men and women in the innovation process, considering their personal characteristics, attitudes, and behaviours. The research covered 1,164 innovative companies, beneficiaries of the European Union Cohesion Policy of 2007–2013. The survey was distributed independently to women and men participating in innovative activities in the researched companies. Two independent responses were received from each company; thus, two independent data samples were created. Both the data composition and the preliminary analysis adhere to the requirements of Principal Component Analysis. The results support a new design proposal to increase the effectiveness of teams working on innovation-focused tasks. In addition to education and experience, managers can now consider personal characteristics and better select women and men to drive innovation.
ABSTRACT The United States Army currently uses after action reviews (AARs) to give personnel feedback on their performance. However, due to the growing use of geographically distributed teams, the traditional AAR, with participants and a moderator in the same room, is becoming difficult to conduct; therefore, distributed AARs are becoming a necessity. Distributed AARs, however, have not been thoroughly researched. To determine what type of distributed AAR would best facilitate team training in distributed Army operations, feedback media platforms must be compared. This research compared three conditions (no AAR, teleconference AAR, and teleconference AAR with visual feedback) to determine whether there are learning differences among them. Participants completed three search missions and received feedback between missions under one of these conditions. Multiple ANOVAs were conducted to compare these conditions and trials. Results showed that, overall, the teleconference AAR with visual feedback improved performance the most. The baseline, or no-AAR, condition resulted in the second-highest improvement, and the teleconference condition resulted in the worst overall performance. This study has implications for distributed military training and feedback, as well as for other domains that use distributed training and feedback. ; 2008-12-01 ; Ph.D. ; Department of Psychology ; Doctorate
Abstract: For most staff, the most challenging leadership role they will play is their first. This chapter describes a research-based training program being developed to help these leaders increase their effectiveness in this first role.
How has social work changed over the years? What are some of the best social work teams doing differently to meet the complex practical and emotional needs of service users? What practical tools and approaches can social work managers implement with their teams? Dr. Judy Foster examines good social work practice and the supporting factors that are essential to underpin social work teams - coherent policies; well-qualified and motivated staff; good management support structures; delegated autonomy and discretion for social workers; and mental space to allow reflective and creative problem solving.
1. Social work : the modern era -- 2. Support for different service users -- 3. Engaging with service users -- 4. Beneath the surface of three teams -- 5. Methodology used to study the three teams -- 6. The need for a coherent policy framework -- 7. Professional skills and development -- 8. Management structures -- 9. Maximising autonomy -- 10. Mental space to think reflectively -- 11. Conclusions -- 12. What now?
Packed with practical information, advice, and case studies, this book shows engineering team leaders how to get up to speed rapidly and efficiently, providing many useful and practical examples of real-life scenarios.
Thomas Brailey shares takeaways from his Catalyst training project, which involved onboarding reproducible workflows for members of the J-PAL Payments and Governance Research Program. Check out the training materials developed as part of the project and read on to learn more! This post was originally published on the CEGA-managed Berkeley Initiative for Transparency in the Social Sciences (BITSS) blog here.

Holding all else equal, ensuring a reproducible and transparent research pipeline is more straightforward with fewer team members. When we discuss achieving reproducible social science in the abstract, there are four broad steps that need clear documentation: 1) obtaining the data; 2) cleaning and wrangling the data; 3) analyzing and visualizing the data; and 4) archiving or releasing the data to the public. With a few principal investigators and research assistants to collect and work on the data, this process has been, in my experience, relatively straightforward. However, ensuring a reproducible workflow becomes markedly trickier when the project has many team members or is integrated into non-academic bodies such as non-profits or governments. Such organizations face an uphill battle in keeping to the ground rules of transparent and ethical research, especially if their partners do not emphasize the norms of transparent social science. One might assume that whatever works for a small research team simply scales up for larger teams, but I would argue that far more care needs to be taken with the latter. This is because individual team members will have different levels of exposure to reproducible practices, different expectations of the research process, and different deliverables and responsibilities.
Does non-analysis code (e.g., back-checks, logic checks, cleaning, and recoding code) need to be treated the same as analysis code, even though it won't be included in a manuscript's replication package? Do policy reports or updates for government officials need to emphasize replicability, even if those industries do not place the same emphasis on transparency as academia? The answer to these questions, I believe, is absolutely yes. With that said, there appears to be very little literature focusing on this particular aspect of reproducible social science, so I will discuss some concrete options to ensure transparency in large research teams (this guide offers a fantastic overview of the whole research pipeline for large teams, but does not focus on the interplay between, and challenges faced by, the whole team).

First, it is important to ensure that all code is version controlled, irrespective of what it does or who it is for. The industry-standard (at least in political science) version-control platform is GitHub, and there are plenty of useful guides for getting this set up. Broadly speaking, each project should be stored as a single repository, with separate folders for cleaning, analysis, and replication code. Each researcher should create a pull request when working on a specific task, then assign another RA to review the changes before merging them into the main branch. Beyond reproducibility, this method ensures accountability among researchers and allows teams to see all changes made to code files from the beginning of the project (by contrast, Dropbox only allows version history tracking for 180 days). Datasets can be stored on GitHub, but it is not necessary to do this, given that there usually isn't a reason to overwrite a raw dataset. There also exist several trusted data storage sites which guarantee permanence and catalog stored data. Documents (.word, .tex, .pdf, etc.)
can be stored on GitHub and version controlled, but it is not considered industry standard to do so. A bifurcated system, where all code is version controlled and non-code files are kept in a shared storage space, can work well for large research teams, though for simplicity, storing all files on GitHub (e.g., linked through the repository's Wiki page) might be helpful.

Second, within this reproducible framework, it is important to ensure that cleaning and analysis are kept parsimonious and well documented. The findings that you publish and present to governments may well be replicable, but if they are based on bad analysis, then they are meaningless. A 2015 PNAS article suggests that the best way to prevent replicable but poor analysis is to "increase the number of trained data analysts in the scientific community and […] identify statistical software and tools that can be shown to improve reproducibility and replicability of studies". Having a well-documented standard for conducting data analysis and data visualization that is uniform across the organization helps thwart potential mistakes or misleading results.

Third, large research teams should encourage the non-academic entities with whom they interact to publish codebooks and thorough documentation accompanying any data that they share. Even if these data are not to be shared with the broader public, it is important for the research team to know exactly how the data were generated. It is exciting to see organizations such as J-PAL focus on bridging the gap between their survey experiments and the administrative data they use for analysis. J-PAL's Innovations in Data and Experiments for Action (IDEA) Initiative "supports governments, firms, and non-profit organizations […] who want to make their administrative data accessible in a safe and ethical way".
With a survey, the research team has full control over the instrument and knows exactly how each variable is generated, but it is just as important to verify the validity of any external data used for analysis, because bad data, like the bad analysis practices discussed above, causes misleading results.

Fourth, it can be very helpful to have at least one team member, or an outside consultant, who remains up to date on the latest reproducible science practices, to monitor the codebase and train the team members. This ensures that all researchers working with code and data can easily collaborate in a single repository. It is vital that all team members, even those who are not in direct contact with code and data, are aware of the importance of reproducible best practices and have exposure to the version-control software that their team uses.

In this post, I have outlined some of the challenges faced by large research teams with regard to ensuring transparency throughout their research pipeline. I have also pointed to a few potentially useful practices that can help these diverse and complex organizations adhere to the tenets of reproducible social science. For those who are interested, all of our team's onboarding materials can be found in our dedicated Open Science Framework repository. Want to share your experience and helpful resources for collaborating in large teams? Get in touch!

Ensuring Reproducibility in Large Research Teams was originally published in CEGA on Medium.
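The per-task branching workflow the post describes (one repository per project, separate folders for cleaning and analysis, one branch per task, a reviewed merge into the main branch) can be sketched with plain Git commands. This is a minimal local sketch: the repository name, folder layout, branch name, and file names are illustrative assumptions, and the local merge stands in for the GitHub pull-request review step.

```shell
set -e
# One repository per project, with separate folders for each pipeline stage.
git init -q demo-project && cd demo-project
git config user.email "ra@example.org" && git config user.name "Example RA"
mkdir -p cleaning analysis replication
echo "* Clean raw survey data" > cleaning/clean_survey.do
git add . && git commit -q -m "Initial repository layout"

# Each task gets its own branch; in practice a teammate reviews the
# pull request on GitHub before it is merged into the main branch.
git checkout -q -b task/recode-education
echo "* Recode education levels" >> cleaning/clean_survey.do
git commit -q -am "Recode education variable"

# Return to the main branch and merge; --no-ff keeps an explicit
# merge commit, so the full task history remains visible.
git checkout -q -
git merge -q --no-ff task/recode-education -m "Merge reviewed task branch"
git log --oneline   # entire change history is preserved
```

Unlike shared-folder syncing, every change here is attributable to a commit and reviewable before it reaches the main branch, which is the accountability property the post emphasizes.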
Most contemporary organizations use management teams to manage and coordinate their businesses at all levels of the organizational hierarchy. Management teams typically set overall goals, strategies, and priorities, making vital organizational decisions. They discuss issues, solve problems, offer advice, and ensure various processes and units are aligned and interact efficiently. Although management teams are vital for overall organizational performance, research indicates that they are largely underused and less effective than their potential for value creation would suggest. This book provides a research-based and practical model of the characteristics of effective management teams. It looks in depth at each factor of the model, discusses the supporting research, provides examples of how the factors influence the work and effectiveness of management teams, and shares tips and tools for successfully working with management team development. It provides researchers, academics, and students of organizational behavior with an overview of the variables that empirical research has found to be robustly related to management team effectiveness, and it will enable leaders and management consultants to develop more effective management teams.