Mastering Data Quality
In: Business Analysis for Business Intelligence, pp. 271-282
In: OECD Handbook for Internationally Comparative Education Statistics, pp. 121-130
Uncertainty over the data quality of Volunteered Geographic Information (VGI) is the largest barrier to the use of this data source by National Mapping Agencies (NMAs) and other government bodies. A considerable body of literature exists that has examined the quality of VGI as well as proposed methods for quality assessment. The purpose of this chapter is to review current data quality indicators for geographic information as part of the ISO 19157 (2013) standard, and how these have been used to evaluate the data quality of VGI in the past. These indicators include positional, thematic and temporal accuracy, completeness, logical consistency and usability. Additional indicators that have been proposed for VGI are then presented and discussed. In the final section of the chapter, the idea of integrated indicators and workflows of quality assurance that combine many assessment methods into a filtering system is highlighted as one way forward to improve confidence in VGI. ; COST action TD1202 (Mapping and the Citizen Sensor)
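The integrated filtering workflow described in the abstract can be sketched as a chain of indicator checks, each gating a contribution. This is only an illustrative sketch: the field names (`gps_error_m`, `lat`, `lon`, `tag`) and thresholds are assumptions for the example, not part of ISO 19157 or the chapter itself.

```python
# Minimal sketch: filter VGI contributions through a chain of
# quality-indicator checks (completeness, logical consistency,
# positional accuracy). All field names and thresholds are illustrative.

def complete_ok(feature, required=("lat", "lon", "tag")):
    # Completeness: all required attributes must be present and non-empty.
    return all(feature.get(k) is not None for k in required)

def consistent_ok(feature):
    # Logical consistency: coordinates must lie in valid ranges.
    return (-90 <= feature.get("lat", 999) <= 90
            and -180 <= feature.get("lon", 999) <= 180)

def positional_ok(feature, max_error_m=10.0):
    # Positional accuracy: reject contributions whose reported
    # GPS error exceeds a threshold (in metres).
    return feature.get("gps_error_m", float("inf")) <= max_error_m

def filter_vgi(features):
    # Apply every check; keep only contributions that pass all of them.
    checks = (complete_ok, consistent_ok, positional_ok)
    return [f for f in features if all(check(f) for check in checks)]

contributions = [
    {"lat": 48.1, "lon": 11.6, "tag": "bench", "gps_error_m": 4.0},   # passes
    {"lat": 48.1, "lon": 11.6, "gps_error_m": 4.0},                   # missing tag
    {"lat": 948.1, "lon": 11.6, "tag": "bench", "gps_error_m": 4.0},  # invalid lat
]
print(len(filter_vgi(contributions)))  # 1
```

In a real pipeline each check would correspond to one ISO 19157 quality element, and the filter chain is what the chapter calls an integrated workflow of quality assurance.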
In: Evaluation and Program Planning: An International Journal, Vol. 14, No. 4, pp. 307-318
ISSN: 0149-7189
Today we are well aware that poor-quality data is costing corporations all over the world large amounts of money. Nevertheless, little research has been done on how organizations deal with data quality management and which strategies they use. This work aims to answer the following questions: Which business drivers motivate organizations to engage in a data quality management initiative? How do they implement data quality management? And which objectives have been achieved so far? Given the kind of research questions involved, multiple exploratory case studies were adopted as the research strategy [32]. The case studies were developed in a telecommunications company (MyTelecom), a public bank (PublicBank) and the central bank (CentralBank) of one European Union country. The results show that the main drivers of data quality (DQ) initiatives were the reduction of non-quality costs, risk management, mergers, and the improvement of the company's image among its customers, aspects that are in line with the literature [7, 8, 20]. The commercial corporations (MyTelecom and PublicBank) began their DQ projects with customer data, in accordance with the literature [18], while CentralBank, which mainly works with analytical systems, began with data source metadata characterization and reuse. None of the organizations uses a formal DQ methodology, but they are using tools for data profiling, standardization and cleaning. PublicBank and CentralBank are working towards a Corporate Data Policy aligned with their Business Policy, which is not the case at MyTelecom. The findings enabled us to prepare a first draft of a "Data Governance strategic impact grid", adapted from Nolan & McFarlan's IT Governance strategic impact grid [17], a framework that needs further empirical support.
In: European Network on Longitudinal Studies on Individual Development [3,1993]
In: Kölner Zeitschrift für Soziologie und Sozialpsychologie: KZfSS, Vol. 44, No. 1, pp. 181-185
ISSN: 0023-2653
Today, it is a well-known fact that poor-quality data is costing corporations all over the world large amounts of money. Despite the increasing research on methods, concepts, and tools for data quality (DQ) assessment and improvement, little has been done about corporate DQ management. The purpose of this research is to understand the nature and complexity of corporate DQ management through various perspectives. These include the kind of sponsorship, the type and level of collaboration between business and IS/IT, the organizational position of the DQ management team, the scope of the DQ initiatives, roles, services provided, and the DQ methodologies, techniques and tools in use. This paper presents, analyzes and discusses a single pilot exploratory case study, undertaken in a fixed and mobile telecommunications company in a European Union country. The purpose of this case study is to check the validity of some initial propositions, and eventually to find new ones, to be used in a subsequent multiple-case study, in order to provide an in-depth understanding of the corporate DQ management phenomenon.
In: The American Journal of Sociology, Vol. 70, No. 1, p. 131
ISSN: 1537-5390
In: Urban Planning, Vol. 1, No. 2, pp. 75-87
This article describes a Los Angeles-based website that collects volunteered geographic information (VGI) on outdoor advertising using the Google Street View interface. The Billboard Map website was designed to help the city regulate signage. The Los Angeles landscape is thick with advertising, and the city's efforts to count the total number of signs have been stymied by litigation and political pressure. Because outdoor advertising is designed to be seen, the community collectively knows how many signs exist and where. As such, outdoor advertising is a perfect subject for VGI. This paper analyzes the Los Angeles community's entries in the Billboard Map website both quantitatively and qualitatively. I find that members of the public are well able to map outdoor advertisements, successfully employing the Google Street View interface to pinpoint sign locations. However, the community proved unaware of the regulatory distinctions between types of signs, mapping many more signs than those the city technically designates as billboards. Though these findings might suggest spatial data quality issues in the use of VGI for municipal record-keeping, I argue that the Billboard Map teaches an important lesson about how the public's conceptualization of the urban landscape differs from that envisioned by city planners. In particular, I argue that community members see the landscape of advertising holistically, while city agents treat the landscape as a collection of individual categories. This is important because, while Los Angeles recently banned new off-site signs, it continues to approve similar signs under new planning categories, with more in the works.
In: Journal of Enterprise Information Management: An International Journal, Vol. 24, No. 3, pp. 288-303
ISSN: 1758-7409
Purpose – While few would disagree that high data quality is a precondition for the efficiency of a company, this remains an area to which many companies do not give adequate attention. Thus, this paper aims to identify the most important barriers preventing companies from achieving high data quality. By improving awareness of the barriers on which to concentrate, companies are put in a better position to achieve high-quality data.
Design/methodology/approach – First, a literature review of data quality and data quality barriers is carried out. Based on this review, the paper identifies a set of overall barriers to ensuring high data quality. The significance of these barriers is investigated by a questionnaire study, which includes responses from 90 Danish companies. Because of the fundamental difference between master data and transaction data, the questionnaire focuses only on master data.
Findings – The results of the survey indicate that a lack of delegation of responsibility for maintaining master data is the single aspect with the largest impact on master data quality. The survey also shows that the vast majority of the companies believe that poor master data quality has significant negative effects.
Research limitations/implications – The contributions of this paper represent a step towards an improved understanding of how to increase the level of master data quality in companies. This knowledge may have a positive impact on data quality in companies. However, since the study presented here appears to be the first of its kind, its conclusions need further investigation in future research.
Practical implications – This paper identifies the main barriers to ensuring high master data quality and investigates which of these factors are the most important. By focusing on these barriers, companies will have better chances of increasing their data quality.
Originality/value – The study presented in this paper appears to be the first of its kind, and it represents an important step towards understanding better why companies find it difficult to achieve satisfactory data quality levels.
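Several of the abstracts above mention tools for data profiling, standardization and cleaning of master data. The basic mechanics can be illustrated with a minimal sketch; the record structure, field names and cleaning rule below are assumptions for illustration only, not taken from any of the studies:

```python
# Minimal sketch of master-data profiling and cleaning:
# measure field completeness, detect duplicate keys, and
# standardize a field so duplicates become comparable.

records = [
    {"customer_id": "C1", "phone": "+45 1234 5678"},
    {"customer_id": "C2", "phone": ""},
    {"customer_id": "C1", "phone": "+4512345678"},
]

def completeness(records, field):
    # Profiling: share of records with a non-empty value for `field`.
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

def standardize_phone(raw):
    # Standardization: a canonical form (here, just strip spaces)
    # makes equivalent values directly comparable.
    return raw.replace(" ", "")

def duplicate_ids(records):
    # Cleaning aid: flag keys that occur more than once.
    seen, dupes = set(), set()
    for r in records:
        key = r["customer_id"]
        (dupes if key in seen else seen).add(key)
    return dupes

print(round(completeness(records, "phone"), 2))  # 0.67
print(duplicate_ids(records))                    # {'C1'}
```

Commercial DQ tools apply the same three steps (profile, standardize, deduplicate) at scale and with rule libraries; the sketch only shows the shape of the computation.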