Of Duels, Trials, and Simplifying Systems
In: European Journal of Risk Regulation, Forthcoming
SSRN
Working paper
The European approach to artificial intelligence (AI) points to ethical considerations, human control and trustworthiness as its core tenets. But how clearly is this approach reflected in the Member States' strategies? This anthology analyses to what extent the notions of ethical and trustworthy AI, presented by the High-Level Expert Group on Artificial Intelligence and the European Commission, have influenced the AI strategies of Portugal, the Netherlands, Italy, the Czech Republic, Poland and Norway, as well as the Nordic countries overall. It is clear that EU-level policies have had an impact on national strategies, although sometimes only where the EU documents were published before the national ones. For instance, while some countries, such as Norway and Portugal, have explicitly incorporated aspects of the Ethics Guidelines, others, such as the Nordic countries, already tended to address questions of trust and transparency, or of ethics, as in the case of Poland. The EU has emphasised AI trustworthiness as both an ethical imperative and a competitive advantage. However, implementation is still at the starting line: much depends on alignment between this diverse group of nations, with different priorities, within the single market.
BASE
In: AI and ethics, Band 3, Heft 3, S. 735-744
ISSN: 2730-5961
Due to the extensive progress of research in artificial intelligence (AI), as well as its deployment and application, the public debate on AI systems has gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness gained particular attention within AI ethics debates; despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview of the notions of trust employed in AI ethics guidelines thus far. On that basis, I assess their overlaps and omissions from the perspective of practical philosophy. I argue that AI ethics currently tends to overload the notion of trustworthiness. It thus runs the risk of becoming a buzzword that cannot be operationalised into a working concept for AI research. What is needed instead is an approach informed by research on trust in other fields, for instance in the social sciences and humanities, especially practical philosophy. This paper is intended as a step in that direction.
In: TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis / Journal for Technology Assessment in Theory and Practice, Band 30, Heft 3, S. 17-22
While a whole range of practical guidelines for implementing the concept of trustworthy artificial intelligence (AI) now exists, there is a lack of concrete examples and implementation projects that could reveal the problems practitioners face on the ground and the strategies that succeed. This article presents selected implementation projects. Throughout, they show a still low degree of concretisation. The article therefore asks for the reasons behind this implementation deficit. Three explanations come into question: time-to-market considerations on the part of companies; uncertainty about which aspects of the trustworthy AI concept are even relevant for which applications; and the fact that implementing AI projects is more complex than implementing 'normal' software projects and therefore requires specific precautions.
The full benefit of using AI to generate value for businesses, societal wellbeing and the environment is yet to be realised. To lower the adoption barriers of Industrial AI, challenges on multiple levels (technical complexity, trustworthiness, industrialisation, data frameworks and infrastructures, etc.) need to be addressed so that its full socio-economic potential for the economy, society and welfare can be unlocked. It is now time to foster the development of "industrial and trustworthy AI", nurture European innovation and sovereignty ambitions, and benefit society and leading European industries. This position paper introduces a comprehensive industrial and trustworthy AI framework that clusters the priority areas for AI research, innovation and deployment. It covers tools and methodologies that support the design, testing, validation, verification and maintainability of AI-based functions and systems, and it addresses the development of AI-based processes and systems to demonstrate their integration into new products and services. Conformity assessment schemes, balancing innovation, business and European perspectives, are considered as a way to connect risk management and functional and trustworthiness requirements to industrial processes. In addition, adequate standards supporting industrial AI and trustworthiness will play a central role. Implementing the industrial and trustworthy AI framework will require resources beyond the means of any single European private stakeholder. Strong support from ecosystems, governments and Europe is therefore not optional but necessary.
BASE
Blog: Centre for Data Ethics and Innovation Blog
Today, we are pleased to announce the launch of DSIT's Portfolio of AI Assurance Techniques. The portfolio features a range of case studies illustrating various AI assurance techniques being used in the real world to support the development of trustworthy AI. …
SSRN
In: How awesome can you be?
"How trustworthy can you be? Take on the challenge to be the most awesome you, you can be. Approachable text filled with examples from the child's world paired with engaging photos makes important SEL learning fun. Plus, a bonus activity at the end lets young readers practice their new skills"--
In: AI and ethics, Band 4, Heft 1, S. 157-161
ISSN: 2730-5961
SWP
In April 2021, the European Union (EU) proposed a legal framework for artificial intelligence. The EU highlights the advantages of using artificial intelligence (AI), especially in the areas of prediction, optimising operations, resource allocation and personalised service delivery. Nevertheless, the EU considers that AI might bring new risks and negative consequences for individuals and society when an AI system violates EU values and fundamental rights. This paper explains how the EU is proposing a legal framework for trustworthy AI. Tech and non-tech companies and professionals should be aware of what kinds of AI applications and practices might be prohibited or restricted within the EU. More specifically, this paper focuses on what types of AI systems can be considered unacceptable-risk, high-risk, or low- or minimal-risk AI.
BASE