January 8, 2021

To ‘Bot’ or Not to ‘Bot’: Does Using AI to Aid Lawyers Cause Ethical Transgressions?

A recent development in modern technology, particularly in the legal sector, is the introduction of Artificial Intelligence [‘AI’] and its myriad uses in the field. The increasing availability of ‘Big Data’ has driven the rapid growth of intelligent algorithms that now form a technological foundation for legal practice. AI has carved out its own niche in optimizing an attorney’s work by assisting in electronic discovery, litigation and predictive analysis, contract review, contract management, document review, and bulk research. This paradigm shift, though immensely advantageous, has exposed certain gaps in the application of professional ethics.

This article considers the steady intrusion of AI into our lives today, as I attempt to assess how the lack of legal guidance and standards may be slowing the adoption of AI in the legal field, and which ethical issues such guidance could address.

What are the primary ethical concerns?

The predominant ethical concern arising from the widespread use of AI lies in how such a machine is built and trained to work on a particular subject matter or task. Broadly, training AI spans visual recognition, speech recognition, and natural language processing, all of which require tremendous amounts of data to support machine learning and predictive results. For example, training a facial recognition system involves teaching it to identify or ‘map’ faces using key features and measurements, reading the distances between nodal features of the face across a massive database of photographs compiled for that purpose. The obvious ethical ramification of facial recognition leads us to question the transparency of the collection and use of such data, and whether consent was obtained from the data subjects. The General Data Protection Regulation and other privacy legislation, although seemingly vague on the training of AI, set certain blanket norms for handling and processing personal data extracted and generated by AI. These norms should ideally be upheld by lawyers who feed data into AI technology and train it to perform legal tasks.
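The ‘face map’ idea above can be made concrete with a minimal sketch. The landmark names and coordinates below are purely hypothetical illustrations, not any real system's data; the point is only that a face is reduced to a vector of distances between nodal features, computed from photographs in a training database.

```python
from itertools import combinations
from math import dist

# Hypothetical landmark coordinates (in pixels) for one face; a real
# system would extract dozens of such 'nodal points' from a photograph.
landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_center": (50.0, 80.0),
}

def face_signature(points):
    """Return the pairwise distances between nodal features.

    This vector of inter-feature distances is the kind of 'face map'
    described above: comparing two signatures approximates comparing
    two faces.
    """
    names = sorted(points)
    return {
        (a, b): round(dist(points[a], points[b]), 2)
        for a, b in combinations(names, 2)
    }

signature = face_signature(landmarks)
print(signature[("left_eye", "right_eye")])  # inter-ocular distance: 40.0
```

Every photograph used to build such signatures is personal data of the person pictured, which is what puts this pipeline squarely within the GDPR's consent and transparency norms.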

Other concerns, such as confidentiality, communication about AI’s use, requests for consent, supervision, and authorized legal mechanisms, should likewise be addressed, so that ethical and professional lapses in the legal field are kept to a minimum and AI can serve as an effective tool in legal aid and assistance.

Setting aside data privacy, should lawyers consider other qualms?

Data privacy is only one of the hurdles that arise when using AI, including robots. Running on information collected from the ‘outside world’, robots may act in ways their creators could not have predicted; yet predictability is crucial to reaching legal conclusions. Another issue is determining responsibility for a breach or damage caused by an unpredictable bot, which is complicated by the difficulty of identifying the causal nexus between the bot’s conduct and the actual damage caused. This question of distributed responsibility is the primary reason why an ethical guideline on the use of, and liability for, AI is so necessary.

Further, in training the AI, the explicit and implicit personal biases, prejudices, and opinions of the people feeding it information are easily absorbed and reproduced by it, which undermines the claim that AI is an ‘objective’ and ‘unbiased’ tool for legal research and predictive analysis.
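A toy illustration of how bias survives training: the groups, counts, and decisions below are entirely invented for the example, and the ‘model’ is deliberately trivial (a per-group majority vote rather than any real machine-learning method). Even so, it shows the mechanism: a system fitted to skewed historical labels reproduces the skew instead of neutralising it.

```python
from collections import Counter

# Hypothetical historical decisions: (applicant_group, outcome).
# Past reviewers approved group "A" far more often than group "B"
# for otherwise similar cases -- a human bias baked into the labels.
history = (
    [("A", "approve")] * 80 + [("A", "deny")] * 20 +
    [("B", "approve")] * 30 + [("B", "deny")] * 70
)

def train_majority_model(records):
    """'Train' by memorising the most common outcome per group."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    # The fitted 'model' is just: predict each group's majority outcome.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority_model(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the historical skew survives
```

Real predictive-analysis tools are far more sophisticated, but the underlying dynamic is the same: the training labels, not the algorithm, decide whose prejudices the output reflects.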

Do countries have laws governing the use of AI in the legal field?

The policy void surrounding AI has led either to the reinterpretation of existing laws to cover AI or to the development of model laws to govern it. In the United States, the American Bar Association Model Rules of Professional Conduct [‘ABA Rules’] in particular have been read so that their standard clauses apply to AI and ‘bot lawyers’, extending major ethical guidelines to them. South Korea’s ‘Intelligent Robot Development and Dissemination Promotion Law’ has been in place since 2008. Further, EU institutions published two documents, ‘European Civil Law and Robotics’ (2017) and ‘Ethics Guidelines for Trustworthy AI’ (2019), laying down ethical and legal provisions to properly streamline AI. By suggesting a new legal status that would confer a legal identity on AI, the ‘electronic person’, these documents aim to make AI and technology easier to govern than ever before. Testing the practical limits of such rules, ROSS, the first AI lawyer bot, despite saving lawyers an estimated twenty to thirty hours per case, deferred situations where difficult issues of professional discretion arose. If such professional quandaries arose for a human attorney, the ABA Rules would direct lawyers in any similar circumstance to resolve these difficulties ‘through the exercise of sensitive professional and moral judgment’.

Conclusion

Although ethical codes prevail in most countries, such as India, the United States, and the United Kingdom, they do not impose any legal requirement specifically covering the use of AI within the field. Deploying AI systems that people do not fully understand makes it extremely difficult to construct and enforce a code of ethics. The increasing reliance on AI and new technology poses a significant challenge for the rule of law and contemporary ethics, demanding deep reflection on morality, governance, and regulation.

References

¹ Ralph C. Losey, A Survey of Emerging Issues in Electronic Discovery: Predictive Coding and the Proportionality Doctrine: A Marriage Made in Big Data, 26 Regent U. L. Rev. 7, 21 (2013–2014).

² The Ethical and Legal Issues of Artificial Intelligence, Modern Diplomacy (Apr. 24, 2018), https://moderndiplomacy.eu/2018/04/24/the-ethical-and-legal-issues-of-artificial-intelligence/

³ New Perspectives on Ethics and the Laws of Artificial Intelligence, Internet Policy Review, https://policyreview.info/articles/analysis/new-perspectives-ethics-and-laws-artificial-intelligence

⁴ Steve Lohr, A.I. Is Doing Legal Work. But It Won’t Replace Lawyers, Yet., N.Y. Times (Mar. 19, 2017), http://www.nytimes.com/2017/03/19/technology/lawyers-artificialintelligence.html?_r=0

⁵ Model Rules of Prof’l Conduct pmbl. ¶ 9 (Am. Bar Ass’n 2011).

⁶ Davey, Towards a Code of Ethics in Artificial Intelligence with Paula Boddington, Future of Life Institute (July 31, 2017), https://futureoflife.org/2017/07/31/towards-a-code-of-ethics-in-artificial-intelligence/
