The annual academic forum draws together researchers from a breadth of disciplines to address significant legal issues in an international context. In 2019, topics ranged from the evolution of robotics and ‘smart robots’ and the interaction of AI, humans and the environment to state responsibility and AI ethics.
The digital revolution – characterised by disruptive technologies and trends such as the Internet of Things (IoT), robotics, virtual reality (VR) and artificial intelligence (AI) – has fundamentally changed the way we live, work and relate, leaving little untouched.
The legal profession is no exception. According to a Deloitte report, automation, artificial intelligence, and analytics are among the technological drivers of change shaping firms of the future. Further reports estimate that up to 100,000 legal roles will be automated by 2036, suggesting firms are fast approaching the “tipping point” for a new talent strategy.
It’s been over half a century since Alan Turing – widely considered the father of theoretical computer science and artificial intelligence (AI) – posed the question, 'Can machines think?'
Today, as the lines between the physical, digital, and biological become further blurred, AI is increasingly being used to complete tasks that previously required human intelligence. While there is little question of its potential to expedite certain processes within the legal profession, significant considerations remain.
We increasingly understand that autonomous and intelligent systems pose a significant challenge to our conception of what the law is and how it works.
"It also poses challenges to the rule of law and procedural justice; to human rights including rights against discrimination and rights to equal treatment; and to our traditional ways of thinking about the allocation of liability in both criminal and civil law,” she continued.
Professor Weatherall was among the University of Sydney presenters, along with Professor Simon Rice, Dr Carolyn McKay, and Dr Penelope Crossley, who highlighted some of the fundamental, pressing concerns at the intersection of technology and the law.
Professor Weatherall says the trademark system might not seem like an obvious starting point for a discussion of the impact of AI on the law or government decision-making. The risks are not a matter of life and death; they are contained within economic and innovation policy. But she says it’s precisely for this reason – “because it is mundane” – that the trademark system is an early candidate for automation, with datasets and technologies (such as content-based image recognition) already moving in that direction.
In her latest research, Professor Weatherall unpacks two alternative approaches to implementing automated decision-making in the trademark system. One approach, she explains, involves trying to make an intelligent/autonomous system do what humans used to do. A second and far better approach, says Professor Weatherall, is to imagine how the trademark system could be reshaped to better serve its purposes by using the capacities of new technologies.
“The goal here is mutual accommodation,” she says. “Simple automation of existing legal processes will not work, because the legal rules are not computable, no matter how good the technology.”
“But that does not mean we should simply throw up our hands and say, ‘computers can’t do it’,” she continued. “Instead, we should look for ways to adjust our legal systems.”
“This is going to require some shift in the law to make use of technical capacity, as well as some humility on the part of those designing technology, to avoid trying to take over the entire legal system.”
In recent years, AI has been presented as a solution for people without access to a lawyer. While previous access-to-justice initiatives have included drop-in sessions, telephone advice lines, pamphlets and brochures, DIY kits, video links and websites, AI can offer far more sophisticated and personalised help through ‘access to justice’ mobile applications.
But Professor Simon Rice says the solution isn’t so simple. AI may be inaccessible for the same reasons that justice itself is inaccessible, such as cost, language, age, culture and disability, he explains.
“In the absence of established standards, there is no certainty that an ‘access to justice’ app will itself be accessible,” said Professor Rice.
“A further consideration is the growing awareness of bias in the data on which AI’s algorithms work, and the resulting discriminatory effect,” he continued. “This effect could be especially acute when the users of an app are from the same marginalised and disadvantaged communities that are not represented in the data.”
Professor Rice says that human-centred design requires consultation with user communities, and design by and for them. Without that care, we risk undermining the effectiveness of AI for access to justice, he said.
“I hope that interdisciplinary research on AI, legal need, the user community, design solutions, and standards will lead to more considered, sound and accessible access to justice solutions,” he added.
Judicial evaluations are increasingly complemented by AI tools that offer predictive capabilities and risk assessments, for instance in bail, sentencing, rehabilitation and parole procedures.
“Assessments have traditionally been the function of human discretion and the intuition of judicial officers, based on clinical assessments, framed by legislation and common law principles, and encapsulating the concept of individualised justice,” explained Dr Carolyn McKay.
She says that while there is a recognised need for responsible, accountable and ethical algorithmic design and instruments, there is also considerable risk when it comes to matters of individual liberty, justice and public safety.
“The use of AI can assist in determining the extent of liberty granted in criminal proceedings,” said Dr McKay. “But the proprietary nature of these AI tools means the calculation of the risk score is opaque, and unknowable to both the offender and the court.”
Dr McKay says the Geneva conference provided a great opportunity to comprehensively focus on AI from multiple perspectives and world views.
“It’s an evolving area of criminal procedure, and we need to ensure that we understand all of the issues at play to ensure accountability and fair and accurate outcomes,” she explained.
The digitisation of the energy market is driving a fundamental transformation of the sector globally.
‘Energy 4.0’, a term derived from the Fourth Industrial Revolution, is changing the traditional roles of energy market regulators, market participants and end-consumers. Characterised by cyber-physical systems such as smart grids, and by new technologies and structures – including AI, big data, blockchain and smart contracts – the revolution has left the industry in a state of disruption.
Dr Penelope Crossley is researching the risks and priorities for energy market regulation in an era of rapid technological development.
“The challenge is three-fold: protecting the consumer and supporting innovation, while understanding the potential risks and consequences associated with disruptive technologies,” said Dr Crossley.
“We need to address the complexities of these emerging technologies, with the changing role played by consumer protection, to ensure the continued development of competitively functioning energy markets.”
The conference began in 2013 and has rotated among the four inaugural partner institutions since its inception. Each year the gathering includes a mix of closed and public sessions. The University of Sydney hosted the conference in 2015. In 2020, it will be held at Renmin University of China.