Trustworthy AI for semiconductor manufacturing. Marcel van Gerven, a Professor of Artificial Cognitive Systems at Radboud University, discusses how his team's research on AI models, algorithms, and demonstrators could serve as a blueprint for the semiconductor manufacturing industry. Artificial Intelligence (AI) is a key enabling technology that, when used responsibly, can bring about significant positive societal impact. One of the initiatives to foster this positive impact is the Dutch ROBUST 'Trustworthy AI-based Systems for Sustainable Growth' program. This ten-year Long Term Program (LTP) is a human-centered research program in AI that brings together knowledge institutes, industry, governmental organizations, and societal partners to develop AI-based methods and tools designed to create social impact and promote sustainable growth (Figure 1).
Worldwide, a multiplicity of parallel activities is being undertaken to develop international standards, regulations, and individual organisational policies related to AI and its trustworthiness characteristics. The current lack of mappings between these activities presents the danger of a highly fragmented global landscape emerging in AI trustworthiness. This could present society, government, and industry with competing standards, regulations, and organisational practices that would then serve to undermine rather than build trust in AI. This chapter presents a simple ontology that can be used for checking the consistency and overlap of concepts from different standards, regulations, and policies. The concepts in this ontology are grounded in an overview of AI standardisation currently being undertaken in ISO/IEC JTC 1/SC 42, and the chapter identifies SC 42's project to define an AI management system standard (AIMS, or ISO/IEC WD 42001) as the starting point for establishing conceptual mappings between different initiatives. We propose a minimal, high-level ontology to support conceptual mapping between different documents and show, in the first instance, how this can help map out the overlaps and gaps between and among SC 42 standards currently under development. This work was conducted by the ADAPT Centre with the support of SFI, by the European Union's Horizon 2020 programme under Marie Skłodowska-Curie Grant Agreement No. 813497, and by the Irish Research Council Government of Ireland Postdoctoral Fellowship Grant GOIPD/2020/790. The ADAPT SFI Centre for Digital Content Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant # 13/RC/2106_P2.
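The chapter's own ontology is not reproduced here; as a purely illustrative sketch of the kind of overlap and gap checking it describes, the snippet below reduces each document's concepts to plain string labels and compares them pairwise. The document names and concept labels are hypothetical placeholders, not the actual contents of the SC 42 drafts.

```python
# Minimal sketch of overlap/gap checking between concept vocabularies drawn
# from different AI standards and policies. All labels below are illustrative
# placeholders, not taken from ISO/IEC WD 42001 or any other SC 42 document.
from itertools import combinations

concepts = {
    "ISO/IEC WD 42001 (AIMS)": {"risk management", "transparency", "accountability", "AI system lifecycle"},
    "Draft standard A":        {"transparency", "robustness", "AI system lifecycle"},
    "Organisational policy B": {"accountability", "fairness", "robustness"},
}

for (doc_a, set_a), (doc_b, set_b) in combinations(concepts.items(), 2):
    overlap = set_a & set_b   # concepts both documents address
    gaps = set_a ^ set_b      # concepts covered by only one of the two
    print(f"{doc_a} vs {doc_b}")
    print(f"  shared concepts: {sorted(overlap) or 'none'}")
    print(f"  covered by only one document: {sorted(gaps)}")
```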
As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of the unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and a wide variety of activities by European and German institutions have been directed toward this goal. Under the label "Trustworthy AI" (TAI), a promise is formulated according to which AI can meet criteria of transparency, legality, privacy, non-discrimination, and reliability. In this article, we ask what significance and scope the politically initiated concepts of TAI have in the current dynamics of AI development, and to what extent they can stand for an independent, distinctly European or German development path for this technology.
The impact of AI, and in particular of deep learning, on industry has been so disruptive that it has given rise to a new wave of research and applications that goes under the name of Industry 4.0. This expression refers to the application of AI and cognitive computing to enable effective data exchange and processing in manufacturing technologies, services, and transport, laying the foundations of what is commonly known as the fourth industrial revolution. As a consequence, today's development trend is increasingly focused on AI-based, data-driven approaches, mainly because leveraging users' data (such as location, action patterns, social information, etc.) allows applications to adapt to them, enhancing the user experience. To this aim, tools like automatic image tagging (e.g. those based on face recognition), voice control, and personalised advertising process enormous amounts of data (often remotely, due to the huge computational effort required) that is too often rich in sensitive information. Artificial intelligence has thus proved so effective that today it is increasingly used also in critical domains such as facial recognition, biometric verification (e.g. fingerprints), and autonomous driving. Although this opens unprecedented scenarios, it is important to note that its misuse (malicious or not) can lead to unintended consequences, such as unethical or unfair use (e.g. discrimination on the basis of ethnicity or gender) or harm to people's privacy. Indeed, if on one hand industry is pushing toward a massive use of AI-enhanced solutions, on the other it is not adequately supporting research into an end-to-end understanding of the capabilities and vulnerabilities of such systems. The results can attract considerable negative media attention, especially in borderline domains such as those related to subjects' privacy or to ethics and fairness, like user profiling, fake news generation, the reliability of autonomous driving systems, etc. We strongly believe that, being just a (very powerful) tool, AI is not to blame for its misuse. Nonetheless, we claim that in order to develop a more ethical, fair, and secure use of artificial intelligence, all the actors involved (first and foremost users, developers, and legislators) must have a very clear idea about some critical questions, such as "what is AI?", "what are the ethical implications of its improper usage?", "what are its capabilities and limits?", "is it safe to use AI in critical domains?", and so on. Moreover, since AI is very likely to be an important part of our everyday lives in the near future, it is crucial to build trustworthy AI systems. Therefore, the aim of this thesis is to take a first step towards addressing the crucial need to raise awareness about the reproducibility, security, and fairness threats associated with AI systems, from a technical perspective as well as from governance and ethical points of view. Among the several issues that should be faced, in this work we try to address three central points: understanding what "intelligence" means and implies within the context of artificial intelligence; analysing the limitations and weaknesses that might affect an AI-based system, independently of the particular technology or technical solutions adopted; and assessing system behaviour in the case of successful attacks and/or in the presence of degraded environmental conditions.
To this aim, the thesis is divided into three main parts: in the first part we introduce the concept of AI, focusing on deep learning and on some of its more crucial issues, before moving to the ethical implications associated with the notion of "intelligence"; in the second part we focus on the perils associated with the reproducibility of results in deep learning, also showing how proper network design can be used to limit their effects; finally, in the third part we address the implications that AI misuse can have in a critical domain such as biometrics, proposing some attacks designed specifically for this purpose. The cornerstone of the whole thesis is adversarial perturbations, a term referring to the set of techniques intended to deceive AI systems by injecting a small perturbation (noise, often totally imperceptible to a human being) into the data. The key idea is that, although adversarial perturbations are a considerable concern to domain experts, they also open up new possibilities both to favour a fair use of artificial intelligence systems and to better understand the "reasoning" these systems follow in order to reach the solution of a given problem. Results are presented for applications in critical domains such as medical imaging, facial recognition, and biometric verification. However, the concepts and methodologies introduced in this thesis are intended to be general enough to be applied to different real-life applications.
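The specific attacks developed in the thesis are not reproduced here; as a purely illustrative example of what an adversarial perturbation looks like in practice, the sketch below implements the well-known Fast Gradient Sign Method (FGSM) in PyTorch. The model, inputs, labels, and epsilon value are placeholders supplied by the caller.

```python
# Illustrative sketch of an adversarial perturbation via the Fast Gradient
# Sign Method (FGSM). Model, inputs, labels, and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.01):
    """Return a copy of x perturbed so as to increase the model's loss.

    The perturbation is epsilon times the sign of the input gradient: a small,
    often imperceptible change that can nevertheless flip the prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # step in the gradient's sign
    return x_adv.clamp(0.0, 1.0).detach()         # keep inputs in a valid range
```

The same mechanism that makes such perturbations a security concern also makes them a diagnostic tool: inspecting which inputs are easily flipped, and how, helps probe the "reasoning" a model relies on.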
This report is a methodological reflection on Z-Inspection. Z-Inspection is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the European Union's High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.
Today, DSIT's Responsible Technology Adoption Unit (RTA) is pleased to publish our guidance on Responsible AI in recruitment. This guidance aims to help organisations responsibly procure and deploy AI systems for use in recruitment processes. The guidance identifies key considerations …
AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed, and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments, and AI audit frameworks. We highlight the responsibilities of funding bodies to ensure investment is channelled towards trustworthy and safe AI systems, and provide case studies of how other ethical funding principles are managed. We offer a first sight of two proposals for funding bodies to consider regarding procedures they can employ. The first proposal is for the inclusion of a 'Trustworthy AI Statement' section in the grant application form, and offers an example of the associated guidance. The second proposal outlines the wider management requirements of a funding body for the ethical review and monitoring of funded projects, to ensure adherence to the ethical strategies proposed in the applicant's Trustworthy AI Statement. The anticipated outcome of employing such proposals would be to create a 'stop and think' stage during the project planning and application procedure, requiring applicants to implement methods for the ethically aligned design of AI. In essence, it asks funders to send the message "if you want the money, then build trustworthy AI!".
This book is an early warning to public officials, policymakers, and procurement practitioners on the impact of AI on the public sector. Many governments have established national AI strategies and set ambitious goals to incorporate AI into the public infrastructure, while lacking AI-specific procurement guidelines. AI is not traditional software, and traditional processes are not sufficient to meet the challenges AI brings. Today's decisions to embed AI and algorithmic systems into public infrastructure can - and will - have serious repercussions in the future. The promise of AI systems is to make the public sector more efficient, effective, fair, and sustainable. However, AI systems also bring new and emerging risks which can impact rights and freedoms. Therefore, guardrails are necessary to consider the socio-technical dimensions and the impact on individuals, communities, and society at large. It is crucial that public sector decision-makers understand the emerging risks of AI systems and their impact on the agency and the wider public infrastructure, and have the means to independently validate vendor claims. This book is the result of interviews with more than 20 public procurement professionals across countries, offering an in-depth analysis of the risks, incidents, governance practices, and emerging good practices around the world, and providing valuable procurement policy and process recommendations to address and mitigate these risks.