How to Evaluate AI Security in Legal Tech: A Guide for Legal Professionals

I hope you enjoy reading this blog post. If you want our team to manage your growth marketing strategies, just […]

I hope you enjoy reading this blog post. If you want our team to manage your growth marketing strategies, just click here.

by Melissa Rogozinski

A version of this article first appeared in Law Journal Newsletter on November 1, 2024.  Download our AI Security Questionnaire for use in meetings with AI providers.

What Law Firms Need to Know Before Trusting AI Systems with Confidential Information

As artificial intelligence (AI) continues to revolutionize industries, the legal profession is no exception. Every authority agrees about the transformative impact AI is having on legal services. As we noted in a previous article, in-house counsel are increasingly using AI for critical tasks like legal research, contract review, and even building chatbots for handling frequently asked questions.

However, as law firms and corporate legal departments adopt AI technologies to streamline their practices, they must face the inevitable question: How secure are these AI systems?

While AI tools offer convenience, they also present unique challenges and risks, particularly when it comes to handling sensitive legal data and confidential client information. Before adopting any AI-based service or product, law firms must ensure that the AI they interact with is not only functional but secure.

As AI continues to reshape the legal field, law firms must adopt a proactive approach to assessing the security and reliability of AI systems. In a profession where confidentiality is paramount, failing to address security concerns could have disastrous consequences. It is vital that law firms and those in related industries ask the right questions about AI security to protect their clients and their reputation.

The Rise of AI in Legal Services: A Double-Edged Sword

The potential of AI in the legal field is immense. Law firms are deploying AI technologies for everything from analyzing large volumes of legal documents to automating routine tasks such as contract management and compliance monitoring.

But with these advancements comes a new wave of challenges. AI models, particularly large language models (LLMs), process data in ways that can be difficult to understand. Legal professionals need to know that these models raise concerns about privacy, data retention, and model biases. When AI is used to handle sensitive information, such as client communications or privileged data, the stakes become higher. An AI model that inadvertently exposes sensitive data or is vulnerable to external attacks could lead to breaches of confidentiality and legal liability.

Governments and bar associations throughout the world are adopting regulations to ensure AI users in the legal community use due diligence to educate themselves about how AI uses and stores confidential data, personal identification information, and other protected material.

To protect your clients’ data from unauthorized disclosure and your firm from liability or disciplinary actions, these are critical questions you should be asking about the AI systems and products your firm is using or considering onboarding.

Richard Robbins, Director of Applied AI at Reed Smith, who contributed to the research for this article and the accompanying AI Security Questionnaire, states, “The framework outlined in this article provides a thoughtful approach to evaluating information security in connection with the use of AI in legal settings. As AI continues to shape legal practice, it’s crucial for law firms to not only adopt these tools to deliver higher value work to clients but to ensure they meet the highest standards of security. By paying careful attention to the themes in this article, firms can use AI to enhance their services while safeguarding client data and maintaining the trust that is fundamental to the attorney-client relationship.”

Critical Questions Law Firms Should Ask About AI Systems (and Why It’s Important)

  1. Model Type and Architecture

  • What type of model is the vendor using?
    Law firms need to know which model (ex. GPT-4o, Claude Sonnet 3.5, Anthropic) and, perhaps more importantly, the information architecture around it.  Each provider structures the underlying information architecture differently, introducing distinct complexities and security protocols, especially with integrations.
  • Where is the model hosted?
    It’s essential to determine where the AI model is being hosted to assess the level of security and data privacy in that jurisdiction. This information helps law firms understand potential risks related to cross-border data transfers.
  • Can you identify the models being used if open-source or proprietary systems are involved?
    Transparency about the models and their creation is key. Law firms must understand how the AI model was developed and whether it adheres to security best practices.
  1. Data Usage and Training

  • Will you train your offering on our data?
    Law firms should be cautious about allowing AI vendors to use their data for training purposes. If data used to train models is not properly anonymized, it could lead to unintended breaches of confidentiality.
  • How will our data be used during training?
    If the AI vendor will train models using the firm’s data, it is critical to understand how the data will be used and whether it could potentially be accessible to third parties.
  • Do the underlying models use our prompts or information for training or customization?
    This question addresses whether the AI model is constantly evolving and learning from inputs. If so, law firms must know whether their confidential data will be used in training and how to control or limit that process.
  • If using retrieval-augmented generation (RAG), how do you prevent prompt injection or unauthorized access to the dataset?
    Law firms should inquire about safeguards against prompt injection attacks, which could expose sensitive data by manipulating AI prompts.
  1. Data Retention, Privacy, and Security

  • How do you protect our data? Is it encrypted in transit and at rest, and who controls the encryption keys?
    Data encryption is non-negotiable in legal contexts. Law firms should ask whether their data is encrypted both while it’s being transferred and when it’s stored. They should also understand who holds the encryption keys to prevent unauthorized access.
  • What are your data retention policies?
    Law firms need clarity on what data is retained, who decides what data is kept, and how long it is stored. This affects the firm’s ability to comply with data privacy laws and its own record-keeping policies.
  • Does anyone in your organization view our prompts or submitted data during our use of the product?
    Some AI providers may require human review of prompts and submitted data, which could be a security risk. Firms must ask whether this occurs and, if so, for what purpose.
  1. Agreements and Compliance

  • Describe the agreement with your provider regarding prompts for training, data retention, and abuse monitoring.
    Law firms must scrutinize agreements with AI vendors, particularly concerning data usage for training and security monitoring. They should ensure that there are clear boundaries around the use of their data and that the vendor offers adequate protection against data abuse.
  • Do you provide indemnification for copyright violations related to our use of your product?
    Given the rise of AI-related copyright disputes, law firms should confirm whether the vendor will provide indemnification if the AI system inadvertently causes copyright infringements through its outputs.
  1. Use Cases & Other Considerations

  • Are there restrictions on the types of information we can use with your product (e.g., PII or protected health information)?
    Firms should confirm whether there are any limitations on using certain types of sensitive data within the AI system, especially concerning personally identifiable information (PII) or protected health information (PHI).
  • How does your product address hallucinations and bias?
    AI models are not immune to bias, hallucinations and other forms of inaccuracy. Law firms should inquire about how the AI provider mitigates these risks and ensures fairness and transparency in its outputs.

Prioritizing Security in an AI-Driven Legal Environment

By asking the right questions, legal professionals can ensure that they are not only leveraging AI to improve efficiency but also safeguarding their clients’ confidential information. AI security is not just a technical issue—it’s a critical part of maintaining trust in the attorney-client relationship. Before integrating AI into their practice, law firms must understand the risks, assess potential vulnerabilities, and ensure they are working with providers who prioritize security at every level.

Learn More about Growth Strategies
Tags: , , , ,
Scroll to Top