Perry, Justice Melissa --- "iDecide: Digital pathways to decision" (FCA) [2019] FedJSchol 3

Law Council of Australia

2019 CPD Immigration Law Conference

21 – 23 March 2019, Canberra

iDecide: Digital Pathways to Decision

The Hon Justice Melissa Perry[∗]

Justice of the Federal Court of Australia



I INTRODUCTION

The present is a brave new world in which today’s reality was yesterday’s science fiction. The rate of change in technological innovations to assist humans even in the minutiae of their daily lives has been astronomical. Computers assist us with choices that shape our everyday lives, from the directions we follow when driving, to the books and music we might enjoy. From Siri to self-driving cars, computers are now making decisions without our conscious input, and not merely guiding our choices.

Our growing reliance on automated processes and machine learning is not limited to the private sector. Government departments and agencies increasingly seek to automate their decision-making processes. This trend corresponds with tight fiscal constraints on governments globally and with rapid growth in the volume, complexity, and subject-matter of decisions made by governments affecting private and commercial rights and interests. The drive towards automation has changed the way in which hundreds of millions of administrative decisions are made in Australia each year.[1]

I will focus this morning on the implications for migration law of automated decision-making and machine learning. While serious questions arise about how best to ensure the compatibility of automated decision-making processes with the core administrative law values that underpin a democratic society governed by the rule of law, the overall message is a positive one: if the proper safeguards are in place to ensure that Australia’s commitment to an administrative law system under the rule of law is not compromised, these technologies can promote consistency, accuracy, cost effectiveness and timeliness in the making of decisions by government.

II WHAT ARE AUTOMATED SYSTEMS AND IN WHAT CONTEXTS ARE THEY EMPLOYED?

A What is decision automation?

Broadly speaking, an automated system is a computerised process which uses coded logic or algorithms to make a decision or part of a decision, or to make recommendations.[2] But not all machines are created equal.[3] Automated decision-making systems can vary considerably both in their processing capacities and in the extent to which their operational capabilities are autonomous of, or reliant upon, human input. Decisions can be wholly or partially automated as the process of automation is “characterized by a continuum of levels rather than as an all-or-none concept”.[4] While some systems have human involvement at the point of making a decision, other systems operate autonomously without further human input beyond the programming stage.

Governments currently rely primarily on pre-programmed rules-based automated systems, as wholly “autonomous” decision-making systems remain expensive and have yet to be perfected. However, as technology continues to improve and we implement “machine learning” (or algorithms which learn from patterns in data),[5] we can expect automated processes to be used to make a greater number of qualitative assessments and recommendations, and perhaps even evaluative decisions.
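
To make the idea concrete, a pre-programmed, rules-based check might look something like the following sketch. It is purely illustrative: the criteria, thresholds and field names are hypothetical and are not drawn from any actual visa class or departmental system.

    # Illustrative only: a hypothetical rules-based eligibility check.
    def assess_application(application: dict) -> dict:
        """Apply pre-programmed rules to an application and return an outcome."""
        reasons = []

        # Gateway 1: a hypothetical age criterion.
        if application.get("age", 0) < 18:
            reasons.append("Applicant is under 18")

        # Gateway 2: a hypothetical health-check criterion.
        if not application.get("health_check_passed", False):
            reasons.append("Health check not recorded as passed")

        # Gateway 3: an evaluative criterion ("character") flattened into a
        # binary flag at the programming stage - the very point at which
        # shades of meaning can be lost.
        if application.get("character_concern", False):
            reasons.append("Character concern flagged")

        decision = "refer to human officer" if reasons else "grant"
        return {"decision": decision, "reasons": reasons}

    print(assess_application({"age": 30, "health_check_passed": True,
                              "character_concern": False}))
    # {'decision': 'grant', 'reasons': []}

Even in so simple a sketch, every pathway and outcome is fixed in advance by the programmer; the system cannot weigh considerations that were not anticipated when the rules were written.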

B Australian and international examples of automation in the public sector

Automated decision-making in the Australian public sector is not, of course, a new phenomenon. Indeed, as early as 1994, the Department of Veterans’ Affairs introduced the first automated decision-making system in Australia.[6]

Recent applications of this technology include uses as diverse as the installation in NSW of cameras to detect illegal mobile phone use by drivers,[7] and the use of a tool by the Fire Department in New York City to analyse data from other city agencies to predict buildings most vulnerable to fire outbreaks and to prioritise these for inspection.[8]

Relevantly to the migration sphere, examples of automation and AI in government decision-making in use today include the application of algorithms to large datasets such as telecommunications metadata in the gathering of intelligence;[9] the automation of decisions to grant certain types of visas;[10] and the installation of the SmartGate system at Sydney International Airport.

And, as a possible glimpse into the future: Hungary, Greece and Latvia are currently testing a system called “iBorderCtrl”. This system screens non-EU nationals at EU borders using automated interviews, conducted before the person arrives at the border, with a virtual border guard.[11] A person who passes the screening process can cross the border without further incident. However, if the system suspects that a person is untruthful, biometric information is taken at the border and the person is passed on to a human agent. The system operates by employing “deception detection” technology which “analyses the micro-gestures of travelers to figure out if the interviewee is lying”.[12]

III SETTING BOUNDARIES FOR THE USE OF AUTOMATED SYSTEMS: CORE ADMINISTRATIVE LAW VALUES

Despite automated systems being embedded into the workings of government today, ordinary citizens and those advising them may be unaware of their use in decisions affecting them. Yet each time an online tax return is lodged, an application is made for social welfare benefits, or an electronic dynamic form is completed, computer systems are processing the input information and are making decisions as part of the now-indispensable machinery of government. Like sheep being corralled through a pen, these systems effectively close off irrelevant gateways as the applicant progresses through the matrix of pre-programmed pathways and the process is completed with the assessment, for example, of a welfare benefit, tax refund or grant of a visa. However, input errors, errors in computer programming and in the translation of complex laws into binary code can result in wrong decisions, potentially on a widespread scale if undetected. Nor are all decisions by government of such a nature that they can appropriately or fairly be made by automated systems. The use of automated systems by governments therefore raises ethical issues and questions of administrative justice.

The first occasion on which these issues appear to have been considered was in the Australian Administrative Review Council’s (ARC) Report to the Attorney-General on Automated Assistance in Administrative Decision-Making in 2004. The report led to the establishment of a Working Group which launched a Better Practice Guide in 2007 – the first of its kind – to assist Australian agencies in their deployment of automated systems. The report and the Better Practice Guide were ground-breaking in their consideration of many of these issues.

As the ARC found, to address these types of issues, the touchstone must be the same core public law values or principles with which all administrative decisions must comply.[13] In particular, it is necessary to consider the measures required to ensure the legality of purported actions by public bodies; to guard against the erosion of procedural fairness and administrative justice; and to safeguard the transparency and accountability of government decisions by the provision of reasons and effective access to merits and judicial review.[14]

Yes, it is true to say that the current technological landscape has substantially changed since 2004.[15] Nevertheless, these are universal values essential to a society governed by the rule of law and they remain especially relevant to new technologies.

A Legality

(1) Need for specific authority

Turning to the first of these values – legality – those who purport to exercise public powers of any nature must be authorised to do so as a matter of law. When it comes to the State, a positive justification is required for legal action affecting rights and interests. This principle constitutes an aspect of the overriding principle of legality. Equally, if an automated system is being utilised to make part or all of a decision, the use of that system must be authorised by law.

It cannot be assumed that a statutory authority vested in a senior public servant, which extends by implication to a properly authorised officer, will also extend to an automated system; nor that authority to delegate to a human decision-maker will permit “delegation” to an automated system. Authority to use such systems should be transparent and express.

Instances of express authority to employ automated systems are becoming more frequent. Of particular relevance is s 495A(1) of the Migration Act 1958 (Cth) (Migration Act), which provides that the Minister may “arrange for the use, under the Minister’s control, of computer programs for any purposes for which the Minister may, or must, ... make a decision; or exercise any power ...”. Another example is the proposed s 56A in the Australian Passports Amendment (Identity-Matching Services) Bill 2018.[16]

Nonetheless it is by no means clear that the issue is being dealt with comprehensively. The concept of “delegating” a decision to an automated system, in whole or in part, raises a number of unique problems. For example:

  • Who is the “decision maker”?
  • To whom has authority been delegated, if that is indeed the correct analysis? Is it the programmer, the policy maker, the human decision-maker, or the computer itself?
  • Is the concept of delegation appropriately used in this context at all? After all, compared to human delegates, can a computer program ever truly be said to act independently of its programmer or of the relevant government agency?
  • What if a computer process determines some, but not all, of the elements of the administrative decision? Should the determination of those elements be treated as the subject of separate decisions from those elements determined by the human decision-maker?

An examination of the statute books reveals an increasing number of provisions addressing these issues by such mechanisms as deeming a decision made by the operation of a computer program to be a decision made by a human decision-maker. An example is s 495A(2) of the Migration Act which provides that the Minister is essentially deemed to have made a decision or exercised a power that was made or exercised by the use of a computer program.[17] Nonetheless such deeming provisions require acceptance of highly artificial constructs of decision-making processes. More sophisticated approaches may need to be developed as these issues come to be litigated in the courts and these provisions fall to be construed and applied.

A related, but no less important, question is whether an automated system’s “decision” is a decision in the relevant legal sense. This issue arose in a decision of the Full Federal Court in Pintarich v Deputy Commissioner of Taxation in 2018.[18] In Pintarich, a letter had been created by a process involving “deliberate interactions between [an officer of the ATO] and an automated system designed to produce, print and send letters to taxpayers”.[19] Despite the taxpayer paying the lump sum stated in the letter, the ATO continued to charge interest and indeed sent a further letter asserting an increased liability.

A majority of the Full Court (Moshinsky and Derrington JJ) dismissed the appeal, holding that there had been “no decision” under s 8AAG of the Taxation Administration Act 1953 (Cth) on the application to waive the general interest charge.[20] The majority held that in order for there to be a decision, there must be a mental process of reaching a conclusion and an objective manifestation of that conclusion.[21] On the evidence before the primary judge, the majority found that this had not occurred. Thus, the ATO was not bound by the letter, and was able to make a subsequent decision increasing the taxpayer’s liability.

However, Justice Kerr in dissent said:

It would turn on its head fundamental principles of administrative law if a decision maker was entitled unilaterally to renounce as ‘not a decision’ (and not even a purported decision) something he or she had manifested in the form of a decision by reason of a distinction between their mental process and the expression of their mental processes.[22]

His Honour further observed that “ ... where an authorised decision-maker’s overt act manifests as a decision on the basis of subject matter ... before him or her ... I am unpersuaded that this Court is entitled to conclude that neither a decision nor a purported decision has been made”.[23]

While the majority observed that “the circumstances of [the] case are quite unusual” since the letter that was generated did not reflect the authorised officer’s intentions,[24] there was an absence of expert evidence before the Court on this issue. The assumption that such cases are unusual, with respect, cannot necessarily be made, as the technology involved is designed specifically to be used on a very large scale.

As Miller noted in her 2018 comment on the Pintarich case, evidence about how the template bulk issue letter was generated was crucial to the majority’s decision which relied on the findings of fact by the primary judge.[25] The case emphasises the need for transparency in how decisions are made, both from the agency’s internal auditing perspective and from the perspective of the person whose rights are affected. It is imperative, in other words, that agencies explain to individuals whether and how technology was involved in the making of decisions.

(2) Lost in translation: from law to code

Questions of legality posed in the context of automated systems go beyond identifying the source of the authority for their use. One of the greatest challenges is to ensure accuracy in the substantive law applied by such processes.

In any process of translation, shades of meaning may be lost or distorted. In our increasingly culturally diverse society, the challenge of ensuring the accurate and fair interpretation of proceedings in a court or tribunal into different human languages is confronted daily. Yet the failure to achieve effective communication by reason of inadequate interpretation can result in a hearing that is procedurally unfair and may, in reality, be no hearing at all.

The rise of automated decision-making systems raises an equivalent but potentially more complex question of what may have been lost or altered in the process of digital translation. Computer programmers effectively assume responsibility for building decision-making systems that translate law and policy into code. Yet computer programmers are not policy experts and seldom have legal training. How can we be sure that complex, even labyrinthine, regulations are accurately transposed into binary code? The artificial languages intelligible to computers, as one commentator has explained, have a more limited vocabulary than their human counterparts.[26] Further, laws are interpreted in accordance with statutory presumptions, and meaning is affected by context. These are not necessarily simple questions and the potential for coding errors or distortions of meaning is real.

Added to this, the law is dynamic. Agencies must constantly and vigilantly ensure that automated systems are up to date to reflect policy changes and legislative amendments in a timely and accurate manner, as the failure to do so again risks decisions being unlawfully made. Such systems also need to be kept up to date while maintaining the capacity to apply the law as it stands at previous points in time for decisions caught by transitional arrangements. The scale of the issue can be illustrated by the frequency of amendments to the Migration Act and its associated regulations, legislative instruments, and policy manuals.
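
One modest illustration of the point is the need for a system to select the version of a rule in force at the relevant time. The following sketch is hypothetical in every particular – the dates, rule and points thresholds are invented for the purpose of illustration only.

    # Illustrative only: a date-versioned rule table, one way a system might
    # apply the law as it stood when a decision is caught by transitional
    # arrangements. Dates and thresholds are hypothetical.
    from datetime import date

    # Versions listed in order of commencement.
    RULE_VERSIONS = [
        (date(2015, 1, 1), lambda app: app["points"] >= 60),  # hypothetical earlier test
        (date(2018, 7, 1), lambda app: app["points"] >= 65),  # hypothetical amendment
    ]

    def applicable_rule(decision_date: date):
        """Select the version of the rule in force on the relevant date."""
        current = None
        for commenced, rule in RULE_VERSIONS:
            if commenced <= decision_date:
                current = rule
        return current

    application = {"points": 62}
    print(applicable_rule(date(2016, 3, 1))(application))  # True under the earlier rule
    print(applicable_rule(date(2019, 3, 1))(application))  # False under the amended rule

Every amendment multiplies the versions that must be maintained, tested and audited, and every version is a further opportunity for the law to be mistranslated into code.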

Against this, it is no coincidence that the government agencies relying most heavily upon automated processes are the same agencies that apply the most complex, intricate and voluminous legislation. It is here that potential gains in efficiency stand to be achieved. Yet it must also be borne in mind that, while the strength of automation lies in its capacity to deliver greater efficiencies of scale, this is also its “Achilles heel”.

Programming errors may be replicated across many thousands of decisions undetected, as occurred in the United States when an error in the Colorado Benefits Management System caused hundreds of thousands of wrongly calculated Medicaid, welfare and benefits decisions to issue.[27]

We must also be cautious of the human tendency to trust the reliability of computers.[28] In biomedical research, artificial intelligence applied to large data sets has been said to lead to inaccurate discoveries because the machine learning is “designed to always make a prediction”, rather than admit that its result is “I don’t know”.[29] Legal advisers and decision-makers must therefore be alert to the risk of assuming the correctness of information provided by automated processes and must take steps to ensure an understanding of the way in which such material has been produced. It may not be enough to assume that the “bottom line” is correct.

These considerations highlight the importance of lawyers being embedded in the design, maintenance and auditing of software applied in these kinds of decision-making processes, and the legislative frameworks within which they operate. In a society governed by the rule of law, administrative processes need to be transparent, and accountability for their results facilitated. Proper verification and audit mechanisms need to be integrated into the systems from the outset, and appropriate mechanisms put in place for human review in the individual case.

B Substantive fairness: when is it appropriate to use automated systems?

So when is it appropriate to use automated systems? I have elsewhere considered these issues in some detail so will make only a couple of brief points.[30] Automated decision-making systems are grounded in logic and rules-based programs that apply rigid criteria to factual scenarios. Importantly, they respond to input information entered by a user in accordance with predetermined outcomes. By contrast, many administrative decisions require the exercise of a discretion or the making of an evaluative judgment, such as whether a person is “of good character”, or whether they pose a “danger to the Australian community”.[31]

It is not difficult to envisage that the efficiencies which automated systems can achieve, and the increasing demand for such efficiencies, may overwhelm an appreciation of the value of achieving substantive justice for the individual. In turn this may have the consequence that rules-based laws and regulations are too readily substituted for discretions in order to facilitate the making of automated decisions in place of decisions by humans. The same risks exist with respect to decisions which ought properly to turn upon evaluative judgments. Legislative amendments directed towards facilitating greater automation by requiring the application of strict criteria in place of the exercise of a discretion or evaluative judgment, should therefore be the subject of careful scrutiny in order to protect against the risk that the removal of discretionary or evaluative judgments may result in unfair or arbitrary decisions.

C Machine learning and automation bias

One answer to the concern about the inability of automated decision-making systems to exercise discretions and make evaluative judgments is to replace those systems with technology that can potentially do those things. Machine learning is one example of such technology.

However, even where the technology is potentially equipped to make such decisions, the question must still be asked whether it is appropriate to employ it in the particular context and to what extent humans should remain in the loop. Furthermore, machine learning technology should not be employed so as to exclude a person from being able fully to consider the merits of a decision, particularly in relation to a decision that affects substantive rights.

What is machine learning?

What then do we mean by machine learning? In an insightful article published in the Australian Law Journal in 2017, which I would highly commend, Professor Lyria Bennett Moses explains that with machine learning:[32]

The computer “learns” from correlations and patterns in historic data. This can be done through supervised learning, where a human provides the machine with outputs for a set of inputs in a “training set”, or through unsupervised learning, where the computer alone identifies clusters and regularities within data. Once correlations, patterns or clusters are identified, the computer is able to classify new inputs (for example, into high and low risk categories), provide outputs for new inputs (for example, risk scores ... ) or place new data into established clusters (for example, different types of offenders or documents). Machine learning acts intelligently in the sense that it learns over time, and can thus be responsive to feedback, and in the sense that the patterns learnt yield useful predictions or insights. However, machine learning does not follow the same logical inference paths as would a human expert making the same prediction; its logic differs from doctrinal reasoning.

Machine learning technology can, in the right circumstances, potentially be used to make evaluative decisions in place of humans. For example, in the UK, police at the Durham Constabulary have spent the past three years testing a computer program called the Harm Assessment Risk Tool (Hart) that claims to adopt a data-based approach to predicting recidivism. The tool draws on five years of data on people taken into custody in Durham and their likelihood of reoffending. The model is trained using a machine learning technique known as “random forest”.[33]
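
For those unfamiliar with the technique, the following minimal sketch shows how a “random forest” classifier of the kind just described is trained on historic cases and then asked to classify a new one. It uses the open-source scikit-learn library and randomly generated data; it bears no relation to the Hart tool or to any real custody records.

    # Illustrative only: supervised learning with a random forest on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training set: each row is a past case described by numeric
    # features (e.g. age, number of prior offences), labelled 0 (did not
    # reoffend) or 1 (reoffended) from historic records.
    X_train = rng.normal(size=(500, 4))
    y_train = (X_train[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Classify a new case from the patterns learnt in the training data.
    new_case = rng.normal(size=(1, 4))
    print(model.predict(new_case))        # e.g. [1], i.e. a "high risk" label
    print(model.predict_proba(new_case))  # a score is always produced

Notably, the model will always return a prediction and a score, however thin the basis for it, and it does not explain its reasoning in the way a human decision-maker would.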

Machine learning, however, is also subject to bias. While it may seem odd to speak of bias in the context of machines, “[p]rogrammers can build matching algorithms that have biased assumptions or limitations embedded in them. They can unconsciously phrase a question in a biased manner”.[34] Furthermore, biases may become embedded and reinforced through the process of machine learning.

For example, allegations of algorithmic bias have arisen in the United States in relation to algorithms used to offer jobs, determine loan applications and rank schoolteachers.[35] Predictive policing algorithms are commonly used in the US and have been publicly criticised for demonstrating algorithmic racial and other biases.[36]

An example is the US risk-assessment tool called COMPAS and its application to a Mr Eric Loomis.[37] COMPAS has been used not only to assess the risk of reoffending, but also to influence sentencing.[38] In Wisconsin in 2013, Mr Loomis pleaded guilty to charges arising from a drive-by shooting. The Court ordered a standard pre-sentencing investigation report, which involved the use of the COMPAS artificial intelligence risk-assessment tool to analyse data from surveys, interviews and public records and to offer an assessment of how likely Mr Loomis was to reoffend. COMPAS labelled Mr Loomis as high risk, and the trial court considered this and other factors in sentencing Mr Loomis to the maximum available sentence.

Mr Loomis appealed to the Wisconsin Supreme Court. The Court expressed concerns about the use of COMPAS in sentencing and noted that “COMPAS has garnered mixed reviews in the scholarly literature”.[39] However, it concluded that the defendant’s right to due process was not violated. Nonetheless the Court acknowledged that Mr Loomis could not challenge the process of calculating his risk assessment score, because the method was a trade secret of the developer company.[40]

As noted by Professor Bennett Moses, race is a particular problem for “risk assessment” algorithms in the United States.[41] An analytical investigation into machine bias by ProPublica showed that black defendants were wrongly labelled as future criminals at almost twice the rate of white defendants.[42] Black defendants were more likely to be assessed as high risk than white defendants, even after accounting for other factors such as criminal history, recidivism history, age, and gender.

Excluding race as a variable does not provide a satisfactory solution, as there are many other variables correlated with race such as socio-economic variables, education levels and where a person lives.[43] Removing all such variables would therefore render the algorithm ineffective.
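
The point can be demonstrated with synthetic data. In the following sketch, which is purely illustrative and uses a hypothetical “postcode” variable, the protected attribute is excluded from the model altogether, yet it can be recovered with high accuracy from the correlated variable that remains.

    # Illustrative only: a correlated variable acting as a proxy for an
    # excluded protected attribute, using synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000

    protected = rng.integers(0, 2, size=n)  # hypothetical protected group (0 or 1)
    # Postcode is strongly, though not perfectly, correlated with group membership.
    postcode = np.where(rng.random(n) < 0.8, protected, 1 - protected)

    # Train a model that never sees the protected attribute as a feature...
    X = postcode.reshape(-1, 1)
    model = LogisticRegression().fit(X, protected)

    # ...yet group membership is largely recoverable from the proxy alone.
    print(f"Accuracy recovering group from postcode: {model.score(X, protected):.2f}")
    # roughly 0.80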

It is also important to emphasise that humans may not be able to predict the results of applying machine learning technology, or understand how certain results were reached, bearing in mind that machine learning allows a computer to teach itself something which it has not been taught by humans. For example, in 2018, scientists from the University of California, Berkeley and Google Brain (a research division of Google) developed an artificial intelligence system whereby a four-legged robot taught itself to walk across a flat surface, over obstacles such as wooden blocks, and up slopes and steps – none of which were present when the AI was trained.[44]

Moreover, when faced with two identical decisions some time apart, machine learning technology may produce a different outcome on the second occasion due to the “learning” that has occurred in the interim. This potentially undermines the consistency in administrative decisions which our system of administrative law seeks to achieve.

D International responses to automated decision-making and machine learning

Regulators abroad have for some time been considering the issues arising from automated decision-making and machine learning. In the EU, the General Data Protection Regulation art 22 requires government agencies and private entities to disclose when decisions are made using automated decision-making, and provides individuals with the ability to object to that use, or request human intervention.[45]

Two recent examples of regulation of automated decision-making systems are worth highlighting as they overlap with the issues I have earlier discussed. On 4 March 2019, the Canadian Minister of Digital Government announced the launch of a Directive on Automated Decision-Making, which is intended to guide how AI can assist with administrative decision-making in government.[46] The Directive states that its objective is “to ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law”.[47] In furtherance of this aim, the Directive imposes requirements on the use of Automated Decision Systems such as an Algorithmic Impact Assessment,[48] requirements for notice, “explanations after decisions”, the release of source code, and quality assurance.[49]

Similarly, in July 2018, the City of New York created an Automated Decision Systems Task Force – the first of its kind in the United States – to study issues of accountability and transparency in the use of algorithms in city government.[50] The task force is due to produce its final report in December 2019. Among other things, the report will:

“... recommend procedures so that people affected by an algorithmic decision can request an explanation upon what the decision was based” and “explain the development and implementation of a procedure in which the city may determine if an automated decision system used by a city agency ‘disproportionately impacts persons based upon age, race, creed, ... religion, ... gender, disability, marital status ... sexual orientation ... or citizenship status’”.[51]

IV CONCLUSION

In conclusion, our enthusiasm to embrace innovation through digital technologies and the efficiencies which they bring must be tempered by the setting of proper limits and by checks and balances. It cannot be denied that employing automated processes can promote consistency, accuracy, cost effectiveness and timeliness in the making of decisions by government. But it is necessary also to ensure that computer-generated information and digital processes are approached with a questioning mind, and that safeguards are in place to ensure that Australia’s commitment to an administrative law system under the rule of law is not compromised. In this, the role to be played by lawyers, academics and the courts is vital.


[∗] LLB (Hons, Adel), LLM, PhD (Cantab), FAAL. The author expresses her particular thanks to Wee-An Tan for his invaluable assistance with research and preparation of this paper. This address is a revised and updated version of ‘iDecide - Administrative Decision-Making in the Digital World’ (2017) 91 Australian Law Journal 29. That article in turn drew upon other presentations by Justice Perry, including in particular a paper presented by Justice Perry and co-authored with Alexander Smith at the Inaugural International Public Law Conference, University of Cambridge, Centre for Public Law, 15-17 September 2014.

[1] Dr Melissa Perry QC, ‘Administrative Justice and the Rule of Law: Key Values in the Digital Era’ (Paper presented at 2010 Rule of Law in Australia Conference, Sydney, 6 November 2010).

[2] Australian Government, Automated Assistance in Administrative Decision-Making: Better Practice Guide (2007) 4 <http://www.ombudsman.gov.au/__data/assets/pdf_file/0032/29399/Automated-Assistance-in-Administrative-Decision-Making.pdf>.

[3] William Marra and Sonia McNeil, ‘Understanding “The Loop”: Regulating the Next Generation of War Machines’ (2012) 36(3) Harvard Journal of Law and Public Policy 1139, 1149.

[4] Raja Parasuraman and Victor Riley, ‘Humans and Automation: Use, Misuse, Disuse, Abuse’ (1997) 39(2) Human Factors 230, 232.

[5] Dominique Hogan-Doran, ‘Computer Says “No”: Automation, Algorithms and Artificial Intelligence in Government Decision-Making’ (2017) 13 The Judicial Review 1, 23.

[6] Robin Creyke, ‘Current and Future Challenges in Judicial Review Jurisdiction: A Comment’ [2003] AIAdminLawF 10; (2003) 37 AIAL Forum 42, 43.

[7] See Centre for Road Safety, Pilot of Mobile Phone Detection Cameras Set to Start Early January 2019 (29 January 2019, Transport for NSW) <https://roadsafety.transport.nsw.gov.au/stayingsafe/mobilephones/technology.html>.

[8] Michael Stiefel, New York Creates Task Force to Examine Automated Decision-Making (31 July 2018, InfoQ) <https://www.infoq.com/news/2018/07/NYC-taskforce-automated-decision>.

[9] Data to Decisions Cooperative Research Centre, ‘Big Data Technology and National Security: Comparative International Perspectives on Strategy, Policy and Law’ (Report, June 2018) 17-18.

[10] Peter Papadopoulos, ‘Digital Transformation and Visa Decisions: An Insight into the Promise and Pitfalls’ (Presentation at the 2018 AIAL National Administrative Law Conference, 28 September 2018) 4.

[11] AlgorithmWatch, ‘Automating Society: Taking Stock of Automated Decision Making in the EU’ (Report, January 2019) 37-38.

[12] Ibid 37.

[13] See Administrative Review Council, Automated Assistance in Administrative Decision Making, Report No 46 (2004); see also John McMillan, ‘Automated Assistance to Administrative Decision-Making: Launch of the Better Practice Guide’ (Paper presented at seminar of the Institute of Public Administration of Australia, Canberra, 23 April 2007).

[14] See Justice Melissa Perry, ‘iDecide: Administrative Decision-Making in the Digital World’ (2017) 91 Australian Law Journal 29, 31, citing Administrative Review Council, Automated Assistance in Administrative Decision Making: Report to the Attorney-General, Report No 46 (2004).

[15] Hogan-Doran, above n 5, 23.

[16] Kerry Weste and Tamsin Clarke, ‘Human Rights Drowning in the Data Pool: Identity-Matching and Automated Decision-Making in Australia’ (2018) 27 Human Rights Defender 25, 27.

[17] See Hogan-Doran, above n 5, 7. See also s 7C(2) of the Therapeutic Goods Act 1989 (Cth) which purports to deem a decision made by the operation of a computer program to be a decision made by the Secretary.

[18] [2018] FCAFC 79 (Pintarich). See Katie Miller, ‘Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79: Accidents in Technology-Assisted Decision-Making’ (2018) 25 Australian Journal of Administrative Law 200, 201 ff.

[19] Pintarich [2018] FCAFC 79, [62] (Kerr J).

[20] Pintarich [2018] FCAFC 79, [144], [153].

[21] Miller, above n 18, 202.

[22] Pintarich [2018] FCAFC 79, [64].

[23] Ibid [56].

[24] Ibid [152].

[25] Miller, above n 18, 202-203.

[26] Danielle Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249, 1261.

[27] Cindi Fukami and Donald McCubbrey, ‘Colorado Benefits Management System: Seven Years of Failure’ (2011) 29 Communications of the Association for Information Systems 5; ibid 1256.

[28] Linda Skitka, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies 991, 992-993.

[29] See Clive Cookson, ‘Scientist Warns against Discoveries Made with AI’, Financial Times (online), 16 February 2019 <https://www.ft.com/content/e7bc0fd2-3149-11e9-8744-e7016697f225>.

[30] See Perry, above n 14, 33-34.

[31] Migration Act 1958 (Cth) ss 501(6)(c), 36(1C)(b).

[32] Lyria Bennett Moses, ‘Artificial Intelligence in the Courts, Legal Academia and Legal Practice’ (2017) 91 Australian Law Journal 561, 563.

[33] Patricia Nilsson, ‘UK Police Test if Computer can Predict Criminal Behaviour’, Financial Times (online), 6 February 2019 <https://www.ft.com/content/9559efbe-2958-11e9-a5ab-ff8ef2b976c7>.

[34] Danielle Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249, 1262.

[35] Will Knight, Biased Algorithms Are Everywhere, and No One Seems to Care (12 July 2017, MIT Technology Review) <https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/>; see also Madhumita Murgia, ‘How to Stop Computers being Biased’, Financial Times (online), 13 February 2019 <https://www.ft.com/content/12dcd0f4-2ec8-11e9-8744-e7016697f225>.

[36] Ali Winston, Palantir has Secretly Been Using New Orleans to Test its Predictive Policing Technology (27 February 2018, The Verge) <https://www.theverge.com/2018/2/27/17054740/palantir-predictive-policing-tool-new-orleans-nopd>.

[37] See Lyria Bennett Moses and Anna Collyer, ‘When and How Should We Invite Artificial Intelligence Tools to Assist with the Administration of Law? A Note from America’ (2019) 93 Australian Law Journal 176, 177-178.

[38] Nilsson, above n 33. See also Nigel Stobbs, Dan Hunter and Mirko Bagaric, ‘Can Sentencing be Enhanced by the Use of AI?’ (2017) 41 Criminal Law Journal 261.

[39] State v Loomis, 371 Wis 2d 235, 287 (2016).

[40] Ibid 276.

[41] Bennett Moses, above n 32, 571.

[42] Bennett Moses and Collyer, above n 37, 177 n 4 and accompanying text.

[43] Bennett Moses, above n 32, 571.

[44] Kyle Wiggers, This AI Teaches Robots How to Walk (31 December 2018, VentureBeat) <https://venturebeat.com/2018/12/31/this-ai-teaches-robots-how-to-walk/>.

[45] Regulation (EU) 2016/679 (General Data Protection Regulation) [2016] OJ L 119/1, art 22.

[46] Treasury Board of Canada Secretariat, ‘Ensuring Responsible Use of Artificial Intelligence to Improve Government Services for Canadians’ (News release, 4 March 2019) <https://www.canada.ca/en/treasury-board-secretariat/news/2019/03/ensuring-responsible-use-of-artificial-intelligence-to-improve-government-services-for-canadians.html>; see also Isabelle Kirkwood, New Federal Directive Looks to Increase Automated Decision Making in Government (5 March 2019, Betakit) <https://betakit.com/new-federal-directive-looks-to-increase-automated-decision-making-in-government/>.

[47] Directive on Automated Decision-Making (2019) <http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592>.

[48] Ibid cl 6.1.

[49] Ibid cll 6.2.1, 6.2.3, 6.2.6, 6.3.

[50] ‘Mayor de Blasio Announces First-in-Nation Task Force to Examine Automated Decision Systems used by the City’ (16 May 2018, NYC) <https://www1.nyc.gov/office-of-the-mayor/news/251-18/mayor-de-blasio-first-in-nation-task-force-examine-automated-decision-systems-used-by>.

[51] Stiefel, above n 8.
