Regarding legal liability, questions may arise about the allocation of responsibility among suppliers, operators and users of AI and machine learning systems – for example, the responsibility of a manufacturer or distributor of a financial product that is based on third-party data input devices or algorithms.10
There are difficult liability issues, including the extent to which humans may be entitled to rely on expert systems in a wide range of settings. Such liability issues will become increasingly important as artificial agents perform a broader range of tasks currently performed by humans, with the potential for mistakes and for legal disputes over damages.11
Finally, the growth of AI and machine learning applications could lead to cross-border issues. Currently, the development of these technologies in finance is concentrated in a small number of countries, while adoption may occur at financial institutions around the world. Regulators should keep in mind that issues of cross-border supervision, cooperation and investigation, along with other regulatory questions, can be expected to arise as AI and machine learning applications operate across jurisdictions.
1 Nobuchika Mori (2017), “Will FinTech create shared values?” speech at Columbia Business School conference, May.
2 Defined in EIOPA (2017), “Opinion of the Occupational Pensions Stakeholder Group on JC Big Data,” EIOPA-OPSG-17-06, 15 March, pp. 6-7. See also U.S. Federal Trade Commission (2016), “Big Data: A Tool for Inclusion or Exclusion,” January, p. 3.
3 See EIOPA (2017); U.S. Federal Register (2017), Vol. 82, No. 33, and Bureau of Consumer Financial Protection: Docket No. CFPB Notice and Request for Information Regarding Use of Alternative Data and Modelling Techniques in the Credit Process, February 21, 2017 (“CFPB RFI”); European Banking Authority (2017), “Report on innovative uses of consumer data by financial institutions,” June. See also FSB FinTech Issues Group (2017), p. 19.
4 OECD (2013), “Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” July.
5 For instance, Articles 13, 14, and 15 require disclosure of the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, “meaningful information about the logic involved,” as well as the significance and the envisaged consequences of such processing for the data subject.
6 Note that Articles 9, 22 and 24 are all subject to exceptions. See Sandra Wachter, Brent Mittelstadt, and Luciano Floridi (2017), “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation,” International Data Privacy Law, forthcoming; and Bryce Goodman and Seth Flaxman (2016), “European Union regulations on algorithmic decision-making and a ‘right to explanation,’” paper presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York. Wachter et al. argue that these provisions confer no right to an ex-post explanation of decisions, though ex-post explanations may be crafted through jurisprudence or EDPB work. Goodman and Flaxman, on the other hand, argue that the law will effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them.
7 Michael Gordon and Vaughn Stewart (2017), “Insights on Alternative Data Use in Credit Scoring,” CFPB Law360, May.
8 See Pang Wei Koh and Percy Liang (2017), “Understanding Black-box Predictions via Influence Functions,” Proceedings of the 34th International Conference on Machine Learning, Sydney; Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin (2016), “‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier,” arXiv:1602.04938v3; and Fast Forward Labs (2017), “New Research on Interpretability,” August.
9 See Bettina Berendt and Sören Preibusch (2014), “Better decision support through exploratory discrimination-aware data mining: foundations and empirical evidence,” Artificial Intelligence and Law 22 (2): 175-209; Indrė Žliobaitė (2017), “Measuring discrimination in algorithmic decision making,” Data Mining and Knowledge Discovery 31 (4): 1060-1089; and Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouzé and Nuria Oliver (2016), “The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good,” working paper, December.
10 See EIOPA (2017), pp. 6-7.
11 Laurence White and Samir Chopra (2011), A Legal Theory for Autonomous Artificial Agents, University of Michigan Press, chapter 4.