
AI and Attorney-Client Privilege: Balancing Innovation with Confidentiality

By Paolina Salas '26

 

During an era of rapid technological development, increasing dependence on technology and artificial intelligence has improved efficiency and convenience across many aspects of life. While these innovations appear to offer limitless benefits, they also raise concerns—particularly regarding the protection of sensitive personal information. As technology evolves in the legal sphere, tensions are rising between reliance on AI in legal practices and the preservation of attorney-client privilege, challenging lawyers' ethical and legal obligations in maintaining confidentiality.


As AI becomes increasingly prevalent in the practice of law—assisting with document review, legal research, contract analysis, and even predictive legal analytics—questions arise about attorneys' diminishing ability to safeguard privileged information. The tension between privacy and progress stems from the fact that AI systems rely on the data users supply in the course of practical use to inform and enhance future applications. Traditional data collection typically involves the passive aggregation of information (gathering a person's data across platforms such as internet browsing and social media activity into a single database), (1) with clear boundaries around what is off limits: information about legal matters, medical records, and trade secrets, for instance, is not available for big-data collection. (2) AI systems differ in that they are deployed within these sensitive areas and must process this information to achieve their goals.


The problem lies not in the processing itself but in the fact that once information enters an AI system, it may no longer remain confidential; it becomes part of feedback loops in which data collection and model training are continuously refined against the system's objective. (3) This feature is inherent to the self-improving nature of AI. (4) It raises the concern that the information could eventually be disclosed, or inadvertently revealed, in the system's output. (5) For attorney-client privilege, a critical legal question follows: could the accidental exposure of privileged information through AI used for legal research, document drafting, or other tasks in the legal field constitute a breach of that privilege?


ABA Formal Opinion 512

The legal precedents that will ultimately resolve these questions are still emerging, though they have been anticipated to some extent. The American Bar Association, in its Formal Opinion 512, has already addressed concerns surrounding the confidentiality of client information at a time when lawyers are becoming increasingly dependent on technology.


The opinion states that to use generative AI (GAI) tools ethically, lawyers must comply with Model Rule 1.6. This rule requires maintaining client confidentiality unless the client gives informed consent, or disclosure is authorized or falls within an exception, such as preventing reasonably certain death or substantial bodily harm, or preventing a client's fraud. (6) Further, attorneys must take reasonable steps to prevent unauthorized disclosure of, or access to, client information. Lastly, the ABA emphasizes that lawyers must always assess the risks of improper disclosure when using generative AI, both inside and outside the firm. This includes evaluating whether information from one client could be improperly revealed through the AI to other clients or attorneys within the same firm. Informed consent is required before confidential client information is input into a GAI tool, and that consent must be specific and transparent. (7)


While the opinion speaks broadly and offers little guidance on the specific steps lawyers should take, requiring clients' informed consent to the exposure risks of AI tools appears to bridge the ethical gap between the lawyer's duty of confidentiality and the practical need to leverage modern technology in legal practice. However, while resolving the immediate ethical dilemma, this solution may undermine the perceived stability and dependability of attorney-client privilege.


The Limitations of Current Ethical Safeguards

Historically, attorney-client privilege has been treated as absolute: once an attorney-client relationship is established, any communication or material exchanged between lawyer and client is protected, and no information can be disclosed without the client's express permission or a specific legal exception. (8) This privilege forms the cornerstone of trust in legal representation, allowing clients to share sensitive information without fear of exposure. (9) It also rests on the assumption that lawyers control and safeguard such information, using only secure and controlled means to manage client data. If the ethical use of AI requires lawyers to explicitly acknowledge the limits of their ability to protect client information, however, it could undermine the trust at the core of the attorney-client relationship. Even without an actual breach, the process of obtaining informed consent highlights a vulnerability that neither the attorney, the client, nor the AI provider can entirely control.


Even when informed consent is obtained, the possibility of a data breach introduces a new dilemma over accountability and responsibility. The traditional understanding of attorney-client privilege relies on direct human accountability and may no longer apply in the same way: because no relevant party fully controls the AI system's behavior, determining who is responsible for a breach becomes difficult, raising profound questions about the future of privilege and liability in an AI-driven legal landscape.

As a result, the solutions described in the American Bar Association's Formal Opinion do not offer a realistic approach to managing the risks of using AI in legal practice.


The EU AI Act: A Step Toward Better Regulation

There is an urgent need for more clarity on legal responsibilities, and potentially for new laws concerning AI, to protect all stakeholders involved, from consumers to developers. Questions about liability, such as who should be responsible for harms caused by AI systems and how that responsibility should be shared, remain largely unresolved. Current liability frameworks may not be sufficient to address the unique challenges posed by AI, and external legal regimes like the European Union's (EU) AI Act could influence U.S. liability systems. (10)


The EU AI Act entered into force on August 1, 2024. The act categorizes AI systems by risk level and imposes different regulations depending on that classification and the context in which a system is used. (11) AI systems designed for legal work such as document review, legal research, contract analysis, or predictive legal analytics would likely fall under the high-risk category because of their relationship to legal outcomes and their implications for justice and individual rights. Moreover, as noted earlier, these systems involve the automated processing of personal data, which can lead to profiling and assessments of individuals' legal situations. Such algorithms are therefore rightly designated high-risk. Adopting similar risk-based designations in the United States is critical, as these classifications recognize and assess the risks of these systems rather than overlooking them for the sake of capitalizing on their benefits.


Another key feature of the law is that the high-risk classification places full responsibility on providers who make these systems available on the market. (12) Providers of high-risk AI systems must therefore establish rigorous risk-management frameworks, ensure data governance, and enable human oversight, among other obligations. (13) Establishing rules for who may be held accountable—whether developers, users, or the lawyers working with the technology—helps protect people whose sensitive information might be exposed. Such rules not only make it easier to hold the right parties accountable but also push for the creation of safer, more dependable AI systems, especially in critical areas like the legal profession.


While laws like the EU AI Act bring society one step closer to bridging the gap between accelerated technological advancement and lumbering legal frameworks, the question remains: can any system guarantee absolute confidentiality when its core function is based on continuous data processing and learning? New laws and risk management frameworks could help reduce the chances of a breach, but the reality is that AI’s self-learning nature introduces uncertainty that neither developers nor legal professionals can fully control.


Endnotes

  1. "Big Data vs. AI," Scraping Robot, accessed October 18, 2024, https://scrapingrobot.com/blog/big-data-vs-ai/.

  2. "Feedback Loop," C3 AI Glossary, accessed October 18, 2024, https://c3.ai/glossary/features/feedback-loop/.

  3. Ibid.

  4. "Rule 1.6: Confidentiality of Information," American Bar Association, accessed October 18, 2024, https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/.

  5. Institute of Medicine (US) Committee on Regional Health Data Networks, Health Data in the Information Age: Use, Disclosure, and Privacy, ed. Molla S. Donaldson and Kathleen N. Lohr (Washington, DC: National Academies Press, 1994), chap. 4, "Confidentiality and Privacy of Personal Data," accessed November 19, 2024, https://www.ncbi.nlm.nih.gov/books/NBK236546/.

  6. "Rule 1.6," American Bar Association, accessed October 18, 2024, https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/.

  7. Ibid.

  8. "Attorney-Client Privilege," Cornell Law School Legal Information Institute, accessed October 18, 2024, https://www.law.cornell.edu/wex/attorney-client_privilege.

  9. Ibid.

  10. "EU AI Act Explorer," Artificial Intelligence Act, accessed October 18, 2024, https://artificialintelligenceact.eu/ai-act-explorer/.

  11. Ibid.

  12. Ibid.

  13. Ibid.



Florida Undergraduate Law Review 2024 | University of Florida
