Artificial intelligence: what it is and why it matters -- The practice of law: what do lawyers do? -- AI and outcome prediction -- AI, pre-trial information gathering (discovery and disclosure) and litigation lawyers -- AI, online courts, and alternative dispute resolution -- AI and transactional lawyers -- AI and regulatory lawyers -- AI and criminal lawyers -- Limitations of AI -- Legal ethics, liability, and regulation in an AI world -- Future of the legal profession.
In: Gabrielle Appleby and Andrew Lynch (eds), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia (Cambridge University Press, 2021)
This article focuses on individual lawyers' responsible use of artificial intelligence (AI) in their practice. More specifically, it examines the ways in which a lawyer's ethical capabilities and motivations are tested by the rapid growth of automated systems, both to identify the ethical risks posed by AI tools in legal services and to uncover what is required of lawyers when they use this technology. To do so, we use psychologist James Rest's Four Component Model of Morality (FCM), which sets out the elements necessary for lawyers to engage in ethical professional conduct when using AI. We examine the issues associated with automation that most seriously challenge each component in context, as well as the skills and resolve lawyers need to adhere to their ethical duties. Importantly, this approach is grounded in social psychology: by examining human 'thinking and doing' (i.e., lawyers' motivations and capacities when using AI), it offers a different, complementary perspective to the typical legislative approach, in which the law is analysed for regulatory gaps.
In: Felicity Bell, Justine Rogers and Michael Legg, 'Lawyer Wellbeing in the (Robotic) Face of Technological Change' in Judith Marychurch and Adiva Sifris (eds) Wellness for Law: Making Wellness Core Business (LexisNexis, 2019)