As more and more legal professionals use artificial intelligence (AI) for everything from legal research to e-discovery to drafting briefs, the technology is creating real headaches for law firms and corporate law departments. The lawyers caught citing fake cases invented by AI are just the tip of the iceberg.
These issues can be avoided by proactively assessing internal AI usage and putting information governance policies in place—but given the rapidly evolving nature of AI, leadership may wonder where to begin.
That’s why we sat down with Tom Corey, a director on Harbor’s Information Governance team. In what follows, Tom discusses how law firms and corporate law departments can effectively manage their AI use through better information governance.
Let’s start with the basics. How do you define information governance in the context of AI, and why is it critical for the development and deployment of AI systems within law firms and corporate law departments?
Information Governance (“IG”) is the function responsible for managing information across a law firm or organization: its records, information, and data. Artificial Intelligence (“AI”) relies on large sets of records, information, and data to work. Therefore, for AI to work properly, legally, and ethically, information governance is essential.
What makes IG unique is that it’s one of the few functions in an organization that is truly enterprise-wide, touching everything from HR to compliance to legal. That reach puts information governance in an ideal position to lead an AI Governance Program within a law firm, government agency, or corporate organization.
An AI Governance program is essential for any organization, whether a law firm or a corporation, that uses or deploys an AI or automated decision-making system to make high-risk, consequential, or critical decisions: decisions that impact a person’s employment, finances, health, personal freedoms, or safety.
An information governance team can work with various stakeholders, such as compliance, IT, legal, marketing, and privacy, to create an AI Governance program that helps maximize the benefits of AI while mitigating many of the risks.
What are the key risks associated with poor information governance in AI projects, and how can law firms and law departments mitigate them?
The biggest AI information governance challenge for most law firms and corporate law departments is simply getting started.
The best way to begin is by auditing where and when AI systems are being used within the organization. People interact with AI all the time without necessarily thinking of it as such, especially if they’ve come to associate AI only with platforms like ChatGPT or Microsoft Copilot rather than a broader pool of automated decision-making tools.
Once you’ve identified and inventoried all the AI-enabled software systems in use, the next step is to assess whether any of these tools are used for critical, consequential or high-risk decision-making, such as anything to do with a person’s finances, employment, health or personal freedoms. While using Google Translate or spam filters (both AI-enabled tools) would fall into the low-risk category, for example, an AI-powered resume screening tool would be classified as high risk.
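To make that inventory-and-classify step concrete, here is a minimal sketch, assuming a simple Python record of each tool and the impact areas its decisions touch; the tool names, impact categories, and one-line classification rule are illustrative assumptions rather than anything prescribed in the interview, and a real inventory would also capture vendors, data sources, and owners.

```python
# Hypothetical sketch of an AI tool inventory with risk tiers.
# Tool names, impact categories, and the classification rule are
# illustrative assumptions, not a prescribed methodology.

HIGH_RISK_IMPACTS = {"employment", "finances", "health", "personal freedom", "safety"}

# Each entry records the tool and which consequential areas its decisions touch.
ai_inventory = [
    {"tool": "Spam filter",            "impacts": set()},
    {"tool": "Machine translation",    "impacts": set()},
    {"tool": "Resume screening tool",  "impacts": {"employment"}},
    {"tool": "Loan pre-qualification", "impacts": {"finances"}},
]

def risk_tier(entry):
    """Flag a tool as high risk if it touches any consequential impact area."""
    return "high" if entry["impacts"] & HIGH_RISK_IMPACTS else "low"

for entry in ai_inventory:
    print(f'{entry["tool"]}: {risk_tier(entry)} risk')
```

Tools that come out of this pass as high risk are the ones that move on to the impact assessments described next.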
Any tools or functions that fall into the high-risk category require an impact assessment to screen for biases or harms that could result from their use. It’s a common misconception among companies that you can depend on the developer having tested for these sorts of things; you can’t. Because AI tools are operating off your data, you have a responsibility to make sure they are operating legally, ethically, and responsibly.
Along those lines, how does information governance for AI intersect with ethical considerations, such as bias, transparency, and accountability?
Bias, transparency and accountability all require documentation—whether it’s through impact assessments, monitoring and auditing systems, obtaining users’ consent, or disclosing the use of AI in the first place. All of this falls within IG’s remit.
From a regulatory perspective, bias is the biggest issue. The bias doesn’t need to be intentional; there are many instances of bias creeping into an AI system because developers didn’t understand the impact of certain data choices when training their model. Take the resume screening example: if you screen for typos, and the system treats two spaces after a period as a typo, it may disproportionately screen out older applicants, since double-spacing is a generational habit. The tool is now discriminating by age. Impact assessments can help root out these biases.
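To see how such an unintentional bias can emerge, here is a toy illustration with synthetic data, assuming a hypothetical screening rule that treats double spacing after a period as a typo; the applicant records and habit rates are invented purely to show the disparate-impact mechanism.

```python
# Illustrative only: how a seemingly neutral "typo" rule can produce
# disparate impact by age. Applicant data and the double-space habit
# are synthetic assumptions, not real statistics.
applicants = [
    # (age_group, uses_double_space_after_period)
    ("under_40", False), ("under_40", False), ("under_40", True),
    ("40_plus",  True),  ("40_plus",  True),  ("40_plus",  False),
]

def passes_screen(uses_double_space):
    """Naive rule: treat double spacing after a period as a 'typo'."""
    return not uses_double_space

for group in ("under_40", "40_plus"):
    members = [a for a in applicants if a[0] == group]
    passed = sum(passes_screen(ds) for _, ds in members)
    print(f"{group}: {passed}/{len(members)} pass the screen")
# Older applicants are screened out at a higher rate, i.e. age bias,
# even though age never appears in the rule itself.
```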
On the accountability front, AI developers and users should have not only a proper understanding of how their tools work but also documentation to support it. That’s because all AI systems and their results need to be explainable. The black box excuse, where outputs can’t be justified beyond “that’s what the system told us,” won’t be sufficient for regulators or the public. At the same time, the system should always have human oversight and the ability to override its results when problems arise, especially for high-risk and consequential decisions.
Transparency is also key, particularly for users outside the organization. If you’re using AI in a chatbot, for instance, and user input is stored and used to further train the AI, you may need to notify users and get their consent.
Remember: when it comes to compliance, using AI doesn’t mean that you’re suddenly exempt from regulations governing discrimination, bias, privacy, or other important protections.
On the topic of regulation, how is the EU AI Act affecting AI information governance frameworks in the legal industry?
The EU AI Act is one of many new AI laws. Other countries like Singapore, Canada, Japan, India and South Korea have their own AI regulations, and we are starting to see U.S. states like Colorado and Illinois develop AI laws. But from a governance perspective, the EU AI Act offers one of the clearest frameworks.
The EU AI Act requires developers (“providers”) and users (“deployers”) of AI tools to identify and assess the risks of their AI systems. For systems identified as “high-risk,” providers must supply and retain conformity assessments, and both providers and deployers must perform and retain impact assessments, demonstrate certain elements of human oversight, and keep records or logs of use. This shows that the systems conform to existing consumer protection, privacy, and product liability laws and frameworks.
What best practices and frameworks do you recommend for establishing robust information governance policies for AI, and how should these practices evolve as AI technology advances?
Creating an AI governance program within a law firm or corporate law department that includes members of the IG, compliance, IT, legal, and privacy teams, among other stakeholders, is essential. That team should lead efforts to understand where and how AI is used within your organization and conduct the impact assessments needed to understand and mitigate risks.
The IG team should also put policies in place governing AI’s use across the organization for both licensed tools and free ones like ChatGPT or Copilot. This may involve, for example, prohibiting employees from inputting sensitive information and trade secrets—including the methodology behind this information—into these tools. Policies around AI vendor contracts are key, too.
Looking ahead, organizations should continue to monitor and audit AI systems to catch model drift, which changes how the systems behave over time and can create new vulnerabilities or biases. At the same time, it’s not enough to have policies in place; you need to continually update them as the technology evolves.
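As one example of what that ongoing monitoring could look like, the sketch below compares a model’s recent scores against a baseline using the population stability index (PSI), a common drift measure; the sample data, bin count, and 0.2 threshold are illustrative assumptions, not a standard mandated by any of the frameworks discussed here.

```python
# Illustrative drift check: compare a model's recent scores against a
# baseline using the population stability index (PSI). Sample data and
# the alert threshold are assumptions for demonstration only.
import numpy as np

def psi(baseline, current, bins=10):
    """Population stability index between two score samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.1, 5000)   # scores captured at deployment
current_scores = rng.normal(0.58, 0.12, 5000)  # scores observed months later

value = psi(baseline_scores, current_scores)
# A PSI above roughly 0.2 is a common rule-of-thumb signal that the model
# should be reviewed for drift, bias, or new vulnerabilities.
print(f"PSI = {value:.3f}", "-> investigate" if value > 0.2 else "-> stable")
```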
This is the second in a series of blog posts detailing the importance of IA before AI. Read the earlier post on dark data here.