A significant set of epistemic and political transformations is taking place as states and societies begin to understand themselves and their problems through the paradigm of deep neural network algorithms. A machine learning political order does not merely change the political technologies of governance, but is itself a reordering of politics, of what the political can be. When algorithmic systems reduce the pluridimensionality of politics to the output of a model, they simultaneously foreclose the potential for other political claims to be made and alternative political projects to be built. More than this foreclosure, a machine learning political order actively profits and learns from the fracturing of communities and the destabilising of democratic rights. The transformation from rules-based algorithms to deep learning models has paralleled the undoing of rules-based social and international orders – from the use of machine learning in the campaigns of the UK EU referendum, to the trialling of algorithmic immigration and welfare systems, and the use of deep learning in the COVID-19 pandemic – with political problems becoming reconfigured as machine learning problems. Machine learning political orders decouple their attributes, features and clusters from underlying social values, no longer tethered to notions of good governance or a good society, but searching instead for the optimal function of abstract representations of data.
Introduction -- Secure Cooperative Learning in Early Years -- Outsourced Computation for Learning -- Secure Distributed Learning -- Learning with Differential Privacy -- Applications - Privacy-Preserving Image Processing -- Threats in Open Environment -- Conclusion.
In: B. Brożek, O. Kanevskaia, & P. Pałka (Eds.), Research Handbook on Law and Technology (pp. 450–467). Edward Elgar. https://doi.org/10.4337/9781803921327.00037
Machine learning is a field at the intersection of statistics and computer science that uses algorithms to extract information and knowledge from data. Its applications increasingly find their way into economics, political science, and sociology. We offer a brief introduction to this vast toolbox and illustrate its current uses in the social sciences, including distilling measures from new data sources, such as text and images; characterizing population heterogeneity; improving causal inference; and offering predictions to aid policy decisions and theory development. We argue that, in addition to serving similar purposes in sociology, machine learning tools can speak to long-standing questions on the limitations of the linear modeling framework, the criteria for evaluating empirical findings, transparency around the context of discovery, and the epistemological core of the discipline.
Recent revelations concerning data firm Cambridge Analytica's illegitimate use of the data of millions of Facebook users highlight the ethical and, relatedly, legal issues arising from the use of machine learning techniques. Cambridge Analytica is, or was – the revelations brought about its demise – a firm that used machine learning processes to try to influence elections in the US and elsewhere by, for instance, targeting 'vulnerable' voters in marginal seats with political advertising. Of course, there is nothing new about political candidates and parties employing firms to engage in political advertising on their behalf, but if a data firm has access to the personal information of millions of voters, and is skilled in the use of machine learning techniques, then it can develop detailed, fine-grained voter profiles that enable political actors to reach a whole new level of manipulative influence over voters. My focus in this paper is not on the highly publicised ethical and legal issues arising from Cambridge Analytica's activities but rather on some important ethical issues arising from the use of machine learning techniques that have not received the attention and analysis they deserve. I focus on three areas in which machine learning techniques are used or, it is claimed, should be used, and which give rise to problems at the interface of law and ethics (or law and morality; I use the terms "ethics" and "morality" interchangeably). The three areas are profiling and predictive policing (Saunders et al. 2016), legal adjudication (Zeleznikow 2017), and machines' compliance with legally enshrined moral principles (Arkin 2010). I note that here, as elsewhere, new and emerging technologies are developing rapidly, making it difficult to predict what might or might not be able to be achieved in the future.
For this reason, I have adopted the conservative stance of restricting my ethical analysis to existing machine learning techniques and applications rather than those that are the object of speculation or even informed extrapolation (Mittelstadt et al. 2015). This has the consequence that what I might regard as a limitation of machine learning techniques, e.g. in respect of predicting novel outcomes or of accommodating moral principles, might be thought by others to be merely a limitation of currently available techniques. After all, has not the history of AI recently shown the naysayers to have been proved wrong? Certainly, AI has seen some impressive results, including the construction of computers that can defeat human experts in complex games, such as chess and Go (Silver et al. 2017), and others that can do a better job than human medical experts at identifying the malignancy of moles and the like (Esteva et al. 2017). However, since by definition future machine learning techniques and applications are not yet with us, the general claim that current limitations will be overcome cannot at this time be confirmed or disconfirmed on the basis of empirical evidence.
Today, open data platforms host a wide and heterogeneous catalog of datasets. However, these datasets are often neglected in Machine Learning (ML) and other related tasks. This is mainly because few available open data catalogs specialize in ML applications, and because it is often unclear whether ML algorithms would perform adequately and well on such datasets. As a result, many open datasets go unused when they could be leveraged by the ML community to explain, evaluate, and challenge existing methods on real open data. For instance, these real-world data could be used by professors teaching ML courses, by students taking those courses, by researchers testing current and novel ML approaches, and possibly to promote the intersection of open data, ML, and public policy. In this talk we will show how we are tackling this issue by working on datasets from data.gouv.fr (DGF), the French open data government platform. We aim to answer the question of what makes a dataset suitable and well-performing for Machine Learning tasks by leveraging open source tools. Our goal is to establish a first small empirical assessment of the characteristics of a dataset (size, balance of its categorical variables, and so on) that make it a "good fit" for Machine Learning algorithms. Specifically, we first manually select an adequate subset of datasets from DGF. Then we perform statistical profiling on each of these datasets. Third, we automatically train and validate a set of ML algorithms on them and cluster the datasets according to their evaluation results. These steps help us better understand the nature of each dataset and thus determine which ones seem suitable for ML applications. Based on these datasets, and inspired by existing resources, we build the first version of a catalog of open datasets for ML. We hope that this platform will be a first stepping stone towards the reuse of open datasets in Machine Learning contexts.
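The three-step pipeline described in this abstract (profile each dataset, train and validate baseline models, then cluster datasets by their evaluation results) can be sketched in a few lines of Python. This is a minimal illustrative sketch using synthetic stand-ins for catalog datasets and scikit-learn tools; the dataset names, the profiling features chosen, and the use of logistic regression and k-means are assumptions for illustration, not the authors' actual methodology or the data.gouv.fr catalog.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans

# Synthetic stand-ins for datasets pulled from an open data catalog
# (names and characteristics are hypothetical).
datasets = {
    "easy_balanced": make_classification(
        n_samples=300, n_informative=5, class_sep=2.0, random_state=0),
    "hard_overlapping": make_classification(
        n_samples=300, n_informative=2, class_sep=0.2, random_state=1),
    "small_imbalanced": make_classification(
        n_samples=80, weights=[0.9, 0.1], random_state=2),
}

profiles = []
for name, (X, y) in datasets.items():
    # Step 2: statistical profiling — size and class balance.
    counts = np.bincount(y)
    balance = counts.min() / counts.max()
    # Step 3a: train/validate a baseline model via cross-validation.
    score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    profiles.append((name, X.shape[0], balance, score))

# Step 3b: cluster the datasets by their evaluation results.
scores = np.array([[p[3]] for p in profiles])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

for (name, n, bal, acc), lab in zip(profiles, labels):
    print(f"{name}: n={n}, balance={bal:.2f}, cv_acc={acc:.2f}, cluster={lab}")
```

In a real run the datasets would come from the catalog itself, the profile would include many more characteristics (missingness, cardinality of categorical variables, etc.), and several model families would be evaluated rather than one baseline.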
Machine learning (ML) is changing virtually every aspect of our lives. Today ML algorithms accomplish tasks that until recently only expert humans could perform. As it relates to finance, this is the most exciting time to adopt a disruptive technology that will transform how everyone invests for generations. Readers will learn how to structure big data in a way that is amenable to ML algorithms; how to conduct research with ML algorithms on that data; how to use supercomputing methods; and how to backtest their discoveries while avoiding false positives. The book addresses real-life problems faced by practitioners on a daily basis, and explains scientifically sound solutions using math, supported by code and examples. Readers become active users who can test the proposed solutions in their particular setting. Written by a recognized expert and portfolio manager, this book will equip investment professionals with the groundbreaking tools needed to succeed in modern finance.