"In making decisions-be they decisions for ourselves, our families, our work, or our government-our thinking is informed by a host of factors that include the information we have on hand, the societal norms exerting pressure in one direction or another, the laws that govern us, and, increasingly, the technology that can bring the power of algorithms, AI, and computing to our aid. Viktor Mayer-Schönberger and Urs Gasser term this overarching set of external influences "guardrails": the structures, much like the same-named barriers on highways, that establish the bounds and direction of desirable behavior. As technology has come to play an outsized role in shaping our decision-making, the authors argue that a clear understanding of what role guardrails can and should play in our society is essential-and that this in turn can help us determine what kind of transparency and accountability we require of the technology we rely on. The authors first consider some of the challenges of decision-making in the digital world in chapters that focus on information and misinformation, human bias and the promise (or not) of AI to correct it, and decision-making in the face of uncertainty. In each case, they show how the quick embrace of technological solutions can lead to results we don't expect or hope for (for instance, the perpetuation of racial discrimination in the algorithmic assessment of credit-worthiness). They then lay out what they see as the key principles for good guardrails-empowering individual decisions, accounting for the social good, and flexibility in the face of new circumstances. Ultimately, the authors present a vision for the future of decision-making that centers individual choice and human volition even in face of technological progress"--
Funded by ADR UK, a new data linking team at the Ministry of Justice set out to link administrative datasets across the justice space, for internal use and sharing with external researchers. To achieve this aim we sought a linkage implementation that was probabilistic, flexible, scalable and ideally open source.
Taking into account the tools available at the MoJ, neither existing open-source software nor paid alternatives met these criteria. We therefore decided to develop a software package that builds on fastLink's R implementation of an Expectation-Maximisation algorithm for estimating a Fellegi-Sunter linkage model, adding a range of technical improvements, increased functionality and customisation options. The distributed computing offered by Spark could facilitate comparable linkage jobs that run on much larger datasets, much faster. When working with government data, accountability and transparency are vital, so the data and models are made accessible through a range of intuitive visualisations.
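For orientation (this is the standard Fellegi-Sunter formulation, not a detail specific to our package): each candidate record pair is scored by comparing m- and u-probabilities, which the EM algorithm estimates. With a comparison vector \(\gamma = (\gamma_1, \ldots, \gamma_K)\) and conditional independence across the K comparison columns, the match weight is

\[
w(\gamma) \;=\; \log_2 \frac{\Pr(\gamma \mid \text{match})}{\Pr(\gamma \mid \text{non-match})} \;=\; \sum_{k=1}^{K} \log_2 \frac{m_k}{u_k},
\qquad m_k = \Pr(\gamma_k \mid \text{match}),\; u_k = \Pr(\gamma_k \mid \text{non-match}),
\]

and, with prior match probability \(\lambda\), the posterior probability that a pair is a match is

\[
\Pr(\text{match} \mid \gamma) \;=\; \frac{\lambda \prod_k m_k}{\lambda \prod_k m_k + (1-\lambda) \prod_k u_k}.
\]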
The Splink Python package has been downloaded over 6 million times. It initially used Spark to deliver its superior performance, but Splink v3 caters for various SQL backends and hence more potential users. As we have made Splink more intuitive, more accessible and more extensively documented, we continue to receive feedback and contributions from around the world, driving further continuous development.
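As an illustration only (not the MoJ's production configuration), a minimal dedupe job against the Splink v3 API with the DuckDB backend might look like the sketch below. The column names are hypothetical, and method and argument names follow the v3 documentation; they may differ in other versions.

```python
# Minimal Splink v3 sketch (DuckDB backend). Assumes a pandas DataFrame `df`
# with a `unique_id` column plus hypothetical columns first_name, surname, dob.
import splink.duckdb.comparison_library as cl
from splink.duckdb.linker import DuckDBLinker

settings = {
    "link_type": "dedupe_only",
    "comparisons": [
        cl.exact_match("first_name"),
        cl.levenshtein_at_thresholds("surname", 2),
        cl.exact_match("dob"),
    ],
    "blocking_rules_to_generate_predictions": [
        "l.surname = r.surname",  # only score pairs sharing a surname
    ],
}

linker = DuckDBLinker(df, settings)

# Estimate u-probabilities from random record pairs, then m-probabilities
# via Expectation-Maximisation under a training blocking rule.
linker.estimate_u_using_random_sampling(max_pairs=1e6)
linker.estimate_parameters_using_expectation_maximisation("l.dob = r.dob")

# Score candidate pairs; keep those above a match-probability threshold.
predictions = linker.predict(threshold_match_probability=0.9)
```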
Through technical innovation and user-focused development, Splink has improved access to cutting-edge data linkage, and created groundbreaking research opportunities at MoJ and beyond. The team is grateful to ONS and other collaborators for testing and adopting these tools, and we will present some of the latest developments as well as examples of how Splink has been used worldwide.
Kudus is a district in the province of Central Java, Indonesia. Its capital, the City of Kudus, lies on the northeast coastline of Central Java between the City of Semarang and the City of Surabaya, about 51 kilometers east of Semarang. Kudus receives a large village fund allocation: the funds transferred to village governments to support village development reached Rp 219.89 billion in 2017. The urgency of this study rests on the problems that surveys have identified in the use of these funds, including: faulty mechanisms; spending that departs from plans or has unclear allocations; non-compliance with guidelines and technical directives (especially for the procurement of goods and services); irregularities in financial statements (mark-ups, mark-downs and double counting); reductions of village fund allocations, for example where village funds are treated as assets of the village head and used for personal gain without accountability; and asset misappropriation, such as the sale, swap or rental of village treasury land by those without the right to it. All these problems can be anticipated by presenting good, transparent information through an information system for the use of village funds (SIMDANDES), developed with the waterfall method. A transparency information system for village fund use (SIMDANDES) is therefore needed for village fund management in Gondangmanis village, Kudus District. Applying an information retrieval algorithm, SIMDANDES monitors the use and absorption of village funds in each village in real time, and provides information for village heads and stakeholders in Kudus District to use in setting policy. Keywords: information system, village fund, information retrieval algorithm
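The abstract does not specify which retrieval algorithm SIMDANDES uses; purely as an illustration of the general idea, keyword search over fund-usage reports could be sketched with TF-IDF scoring as below. The dataset, field contents and query are invented.

```python
# Hypothetical sketch of keyword retrieval over village fund-usage reports,
# in the spirit of SIMDANDES's information retrieval component (the paper
# does not specify the exact algorithm). Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "procurement of goods for road repair, budget absorbed 80 percent",
    "village hall renovation, spending matches the approved plan",
    "irrigation project, allocation unclear in the financial statement",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(reports)

query = "unclear allocation financial statement"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]

# Rank reports by relevance to the query, most relevant first.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {reports[idx]}")
```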
The aim of this work is to analyse the most relevant issues related to the automation of administrative decisions. This phenomenon is the result of a process of technological development that has recently begun to affect the traditional understanding of the organization and action of public authorities. In particular, thanks to the spread of increasingly sophisticated computer systems, public administrations now have the opportunity to adopt administrative acts using appropriate algorithms. Although heterogeneous in structure and function, these tools allow public action to achieve levels of efficiency and speed difficult to reach through the ordinary conduct of administrative procedures. Aware of the significant benefits of this technological change, the study asks whether and how the use of computer programs by public entities can be considered compatible with the traditional procedural guarantees of administrative law. To answer this question, it analyses the automation of decision-making in four main parts. The first part defines the historical and legal framework of the digitalization of the public sector. Specifically, after pointing out the most significant legislative measures adopted in this field in recent years, the work analyses the main issues related to the implementation of the e-Government model within the Italian legal system. After brief terminological clarifications, the theme of automation is introduced and some notable cases in which computer systems were used within public procedures are examined. The second part of the analysis focuses on the admissibility and the field of application of algorithmic decisions. This topic was already at the heart of earlier doctrine, which held that automation was permissible only where the administrative power being exercised could be considered constrained. That approach has been overtaken by more recent literature, which highlights the opportunity of extending, subject to some limitations, the use of automated systems to cases where the public power is discretionary in nature. These interpretative guidelines also seem to have influenced the judgments of the administrative courts, which over time have shown a more open attitude towards the use of algorithms in administrative procedures. The third part of the work focuses on whether the main procedural guarantees provided by the Italian law on administrative procedure can be reconciled with the structural and functional peculiarities of automation. This analysis asks, in particular, whether and in what terms the use of algorithms can be considered compatible with three fundamental principles governing the action of public authorities: the principle of transparency of administrative decisions; the principle of motivation of administrative acts; and the principle of private participation in administrative proceedings. Each topic is analysed by examining both the evolution of the doctrinal debate and the interpretative solutions proposed in the case law, where the legal conditions that must guide any attempt to adopt administrative acts using computer software were defined for the first time.
Finally, the last part of the study deals with the role and responsibility of public administrations in the automated decision-making process. This section analyses, in particular, how the most relevant doctrinal and jurisprudential positions came to affirm the direct accountability of public bodies for the effects deriving from the adoption of automated administrative acts. In light of this framework, some final considerations are formulated regarding the current need to regulate the phenomenon within the public sector.
"Artificial intelligence (AI) is beginning to appear in everything from writing, social media, and business to wartime or intelligence strategy. With so many applications in our everyday lives and in the systems that run them, many are demanding that ethical implications are considered before any one application of AI goes too far and causes irreparable damage to the personal data or operations of individuals, governments, and organizations. For instance, AI that is fed data sets that are influenced by human data collection method biases may be perpetuating societal biases with implicit bias that can create serious consequences. Applications of AI with implicit bias on recidivism prediction models as well as medical algorithms have shown biases against certain racial or ethnic groups, leading to actual discrimination in treatment by the legal system and the medical systems.Regulatory groups may identify the bias in AI but not the source of the bias, making it difficult to determine who to hold accountable. Lack of dataset and programming transparency can be problematic when AI systems are used to make significant decisions, and as AI systems become more advanced, questions arise regarding responsibility for the results of their implementation and the regulation thereof. Research on how these applications of AI are affecting interpersonal and societal relationships is important for informing much-needed regulatory policies.Investigating the Impact of AI on Ethics and Spirituality focuses on the spiritual implications of AI and its increasing presence in society. As AI technology advances, it raises fundamental questions about our spiritual relationship with technology. This study emphasizes the need to examine the ethical considerations of AI through a spiritual lens and to consider how spiritual principles can inform its development and use. This book covers topics such as data collection, ethical issues, and AI and is ideal for educators, teacher trainees, policymakers, academicians, researchers, curriculum developers, higher-level students, social activists, and government officials."--
""The next generation of systems and practices in journalism will require knowledge beyond online editing techniques, aggregation, social media flow and assumptions about fake news. The profession may also want to aim for ethical practices in journalism to be embedded in algorithms for new systems. Engagement in an early design phase may also be useful for scoping reforms for online and social media legislation. However, these pursuits require higher levels of understanding about backend data and online systems, and development of formal vocabulary for journalism concepts and practices. This new domain knowledge should also be expressed in ontological models, informed by participatory approaches. Some problems to be addressed include editorial control issues and fair distribution of news stories and other challenges of data and online systems. Problematic issues should also include the lack of transparency in corporate data sharing arrangements. The semantic language for future systems for journalism will be distinctly different from the vocabulary and classifications used for online news tags. It will also need to distinguish the vocabulary for social media things in context of journalism. Most importantly, the design of new systems will need participatory and semantic design methods that can support the need for high-level knowledge of data and semantic search methods. The influence of social media partnerships in news and backend data sharing are other problem areas. Data via integrated media systems in news organisations flows onto cloud servers where it is processed with a myriad of methods. These hubs are for the new generation of data sharing, where large volumes of data are sorted and processed at accelerated speeds, for a range of purposes. Cloud servers are now literally the highest levels of digital convergence, other than legislation, and the latter is lagging. This is where data is shared for advertising, social media benefits and other domain purposes. Integrated media systems bring benefits for global networked news media organisations, but they also enable more monetisation of data via cloud servers. ""--
Carbon is fundamental for life on Earth and for humans. The last century has witnessed an increase in the concentration of carbon dioxide in the atmosphere due to anthropogenic activities. This has generated abrupt and irreversible changes in climate, with possible consequences for human production, consumption and livelihoods. Charging a reliable price for carbon might represent one way to reach an ideal level of carbon emissions, leading to sustainable consumption, production and investment patterns. According to some studies, the marketplace would be the ideal policy-level solution for pricing carbon. At the same time, an appropriate carbon price is the backbone of a full transition towards sustainability of the financial sector, among others. Sustainable finance entails tackling climate-related risk along with reliable information and (carbon) price signals for investors. This work deals with some aspects of the role of carbon and financial markets in the transition towards sustainability. The first chapter adopts a methodology including lasso-based optimization in time-series econometrics to analyse EU ETS price behaviour with respect to a wide set of variables including CO2. The second chapter analyses the profiles of EU countries in terms of environmental performance, energy efficiency and renewable sources via a cluster algorithm. The third chapter is an exploratory work on the role of financial innovation in tackling specific classes of risk in sustainability-oriented projects, including climate-related risk. Within the limitations of the methodology, the results highlight how carbon and financial markets need to function under the same pillars (e.g., commitment, transparency, price signals). The overall aim of the dissertation is to investigate the role played so far by markets in the sustainability transition, considering its structural limitations (e.g., equity).
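The dissertation does not detail the clustering procedure; as a minimal sketch of the general approach only, country profiles could be standardized and grouped with k-means as below. The countries, indicators and values are invented for illustration.

```python
# Minimal sketch of clustering country profiles by environmental indicators,
# in the spirit of the second chapter (invented data, hypothetical indicators).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

countries = ["AT", "DE", "IT", "PL"]
# Columns: renewables share (%), energy intensity, CO2 per capita (invented).
X = np.array([
    [33.0, 90.0, 7.1],
    [19.0, 95.0, 8.5],
    [18.0, 85.0, 5.5],
    [12.0, 180.0, 8.8],
])

# Standardize so each indicator contributes comparably to the distance.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```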
There is no doubt that we live in exciting times: Ours is the age of many 'silent revolutions' triggered by startups and research labs of big IT companies; revolutions that quietly and profoundly alter the world we live in. Another five or ten years, and self-tracking will be as normal and inevitable as having a Facebook account or a mobile phone. Our bodies, hooked to wearable devices sitting directly at or beneath the skin, will constantly transmit data to the big aggregation in the cloud. Permanent recording and automatic sharing will provide unabridged memory, both shareable and analyzable. The digitization of everything will allow for comprehensive quantification; predictive analytics and algorithmic regulation will prove themselves effective and indispensable ways to govern modern mass society. Given such prospects, it is neither too early to speculate on the possible futures of digital media nor too soon to remember how we expected it to develop ten, or twenty years ago. The observations shared in this book take the form of conversations about digital media and culture centered around four distinct thematic fields: politics and government, algorithm and censorship, art and aesthetics, as well as media literacy and education. Among the keywords discussed are: data mining, algorithmic regulation, sharing culture, filter bubble, distant reading, power browsing, deep attention, transparent reader, interactive art, participatory culture. The interviewees (mostly from the US, but also from France, Brazil, and Denmark) were given a set of common questions as well as specific inquiries tailored to their individual areas of interest and expertise. As a result, the book both identifies different takes on the same issues and enables a diversity of perspectives when it comes to the interviewees' particular concerns.
Among the questions offered to everybody were: What is your favored neologism of digital media culture? If you could go back in history of new media and digital culture in order to prevent something from happening or somebody from doing something, what or who would it be? If you were a minister of education, what would you do about media literacy? What is the economic and political force of personalization and transparency in digital media and what is its personal and cultural cost? Other recurrent questions address the relationship between cyberspace and government, the Googlization, quantification and customization of everything, and the culture of sharing and transparency. The section on art and aesthetics evaluates the former hopes for hypertext and hyperfiction, the political facet of digital art, the transition from the "passive" to "active" and from "social" to "transparent reading"; the section on media literacy discusses the loss of deep reading, the prospect of "distant reading" and "algorithmic criticism" as well as the response of the university to the upheaval of new media and the expectations or misgivings towards the rise of the Digital Humanities.
How should legal systems respond to rules that virtual community providers such as Facebook or World of Warcraft impose on users? To answer this question, we must look beyond black-letter law. Only an understanding of the legitimacy of these rules allows us to balance their relationship with legal systems. Current scholarship theorizes their legitimacy as follows: non-legal mechanisms (e.g. direct voting systems), judicial review according to constitutional principles, or digital civil constitutions may legitimize the rules. Yet three points remain doubtful: whether virtual communities can develop legitimate self-governance mechanisms, whether they should be treated like states, and whether digital civil constitutions will effectively emerge. This work proposes an alternative legitimacy model: German private law reflects a legitimacy model for private rule-making that is applicable to the rules of virtual communities and can serve as a transnational template. This model suggests that the rules can derive legitimacy from two sources: user consent and the common good of users, the latter ensured by judicial review protecting users against exploitation. This leads to the following key findings: 1. Written rules of virtual communities are weakly legitimized by user consent but derive additional legitimacy from judicial review. Contract-law standards apply to rules that govern the bilateral exchange relationship between providers and users. General rules of conduct for users are checked against fundamental rights.
The required intensity of review depends on the risk of user exploitation and the presence of legitimate self-governance mechanisms. 2. Rules embedded in computer code (e.g. newsfeed algorithms) are poorly legitimized by user consent. Judicial review procedures legitimizing them towards users still need to be established. 3. Both written rules and rules embedded in computer code are not legitimate towards non-users since non-users have not consented to them.
This article presents the results of a study of the articulation of cultural policies with public communication, based on the transformation of open government data into knowledge that citizens can plausibly interpret. Based on design science research, the study presents, as a result, the creation of an artifact (the Elum software) that operationalizes public communication from the information available at the Court of Accounts of the State of Rio Grande do Sul, Brazil, on municipal public administration expenditure in the cultural sector. This interest is associated with transparency, accountability and social control, established by legal provisions and translated into information-access, transparency and social-control portals. The data accessed by this research were treated (as indicators) and communicated (as public communication) to generate a cognitive equivalence between those who hold them and those who have a potential interest in understanding them. A flow of communicative relations in the public interest was thereby prototyped. In addition, the study reveals a recontextualization of public communication with regard to algorithms, interfaces and devices that reorder social phenomena connected to the observation of public policies. In conclusion, the study highlights the need for the communication field to articulate with other fields of knowledge, which has epistemological implications (the construction of communication knowledge concerning data, algorithms and interfaces) and methodological implications (methodologies that can conceive and develop artifacts capable of generating solutions to problems established in social reality).
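To illustrate the "data to indicator" step the article describes (Elum's actual pipeline is not documented at this level, and the column names below are invented), raw expenditure records could be turned into a per-capita culture-spending indicator like so:

```python
# Hypothetical sketch of the data-to-indicator step: turning raw municipal
# expenditure records into a per-capita culture-spending indicator.
# Column names are invented; Elum's real schema is not published here.
import pandas as pd

spending = pd.DataFrame({
    "municipality": ["A", "A", "B"],
    "sector": ["culture", "health", "culture"],
    "amount_brl": [120_000.0, 500_000.0, 45_000.0],
})
population = pd.DataFrame({
    "municipality": ["A", "B"],
    "population": [60_000, 15_000],
})

culture = (
    spending[spending["sector"] == "culture"]
    .groupby("municipality", as_index=False)["amount_brl"].sum()
    .merge(population, on="municipality")
)
culture["culture_spend_per_capita"] = culture["amount_brl"] / culture["population"]
print(culture)
```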
Concepts and classifications of media accountability. Theory and practice of media accountability in Europe: an introductory overview / Tobias Eberwein, Susanne Fengler & Matthias Karmasin -- European models of journalism regulation: a comparative classification / João Miranda & Carlos Camponez -- The circular impact model: conceptualizing media accountability / Caroline Lindekamp -- Political and societal challenges. Media accountability in the era of fake news: journalistic boundary work and its problems in Finland / Heikki Heikkilä & Jari Väliverronen -- Media accountability instruments concerning immigration and the polarisation of trust in journalism in Sweden / Torbjörn von Krogh & Göran Svensson -- Press repeat: media self-regulation in the United Kingdom after Leveson / Gordon Ramsay & Martin Moore -- Media accountability meets media polarisation: a case study from Poland / Michal Glowacki & Michal Kus -- Economic and organisational challenges. Selling short media accountability? the importance of addressing market-driven claims on media freedom / Andrew T. Kenyon, Eva-Maria Svensson & Maria Edström -- Public value and shared value through the delivery of accountability / Kaisa Sorsa -- Strengthening media accountability through regulated self-regulation: the Swiss model / Mirco Saner & Vinzenz Wyss -- Accountability and corporate social responsibility in the media industry: a topic of relevance? / Isabell Koinig, Sandra Diehl, Franzisca Weder & Matthias Karmasin -- Technological challenges. Involvement of private and civil society actors in media regulation processes: a comparison of all European Union member states / Dirk Arnold -- Emerging structures of control for algorithms on the internet: distributed agency, distributed accountability / Florian Saurwein -- Ensuring accountability and transparency in networked journalism: a critical analysis of collaborations between whistle-blowing platforms and investigative journalism / Colin Porlezza & Philip di Salvo -- Perspectives: rethinking the role of the audience. Complaints handling mechanisms and online accountability in Western European PSB / Dolors Palau-Sampio -- A wheelbarrow full of frogs: how media organisations in the Netherlands are dealing with online public complaints / Yael de Haan -- The battle over the living room: constructing an accountable popular culture / Efrat Daskal -- Examining media accountability in online media and the role of active audiences: the case of Spain / Jose A. García-Avilés -- Media criticism in an African journalistic culture: an inventory of media accountability practices in Kenya / David Cheruiyot.
In light of the Fourth Industrial Revolution, the intervention of artificial intelligence in commercial transactions has expanded. AI is no longer merely the subject-matter or object of the contract, whether as a material or intangible product; it has gone beyond that to play a fundamental and effective role in concluding the contract as an electronic agent, making the contract automated and concluded, in whole or in part, without human intervention. The UAE legislator took an interest in regulating its use, both under the Electronic Transactions Law of 2006 and the current law of 2021, and referred to this possibility under the Trade through Modern Technology Law of 2023. It treats the electronic agent as an information program that represents the original principal, who bears the effects of the transaction concluded with the intervention of artificial intelligence, even though the electronic agent is not granted legal personality. The integration of artificial intelligence and modern technology into commercial transactions has profoundly transformed the landscape of commercial law. AI technologies, including natural language processing and machine learning algorithms, are increasingly utilized for contract formation, risk assessment, and dispute resolution. Such technologies enhance efficiency, decrease human error, and speed up transactional processes, offering substantial advantages for businesses and consumers. Modern technology, comprising digital currencies, smart contracts, and blockchain, has further revolutionized commercial transactions by offering unprecedented levels of security, transparency, and automation. Blockchain technology guarantees immutable and verifiable records, while smart contracts execute automatically when predefined conditions are met. This minimizes the need for intermediaries and enhances the integrity of the transactions carried out. Additionally, this paper addresses the legal and regulatory responses to these technological advancements. Jurisdictions across the world are grappling with the need to update existing laws or enact new ones to address the challenges posed by AI and modern technology in commerce. The dynamic nature of technology calls for a flexible and adaptive legal approach to ensure that commercial law remains relevant and effective.
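To make the "execute automatically when predefined conditions are met" idea concrete, here is a toy sketch of escrow-style contract logic. It is illustrative Python under invented names, not code for any particular blockchain platform or the paper's own example.

```python
# Toy sketch of smart-contract logic: funds are released automatically once
# a predefined condition holds, with no intermediary deciding the outcome.
from dataclasses import dataclass

@dataclass
class Escrow:
    amount: float
    delivered: bool = False
    released: bool = False

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._maybe_release()

    def _maybe_release(self) -> None:
        # The 'contract' executes itself when its condition is met.
        if self.delivered and not self.released:
            self.released = True
            print(f"Releasing {self.amount} to the seller")

deal = Escrow(amount=1000.0)
deal.confirm_delivery()  # -> Releasing 1000.0 to the seller
```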