Generative AI usage in law firms: Control and governance
Generative artificial intelligence (AI) has gained significant media attention due to the potential benefits and opportunities it offers across industries. However, unless monitored and managed carefully, the use of generative AI also presents significant hazards. Law firms should act now to understand how it is being used within their organisations, and how to identify, control, and mitigate potential risk events.
There have been instances of professional service firms, including law firms, being encouraged to use open-source, public generative AI tools for legal work. These tools may be accessed by practitioners at all levels, and it is critical that firms are aware of the potential risk events that may occur, including:
· Breach of confidentiality
· Infringement of data rights
· Intellectual property infringement
· Cyber-crime from malicious actors hacking user accounts
· Hallucinations and errors (answers that sound plausible but are incorrect)
· Limited or outdated training data in the tool, resulting in unreliable outputs
· Generative AI not being designed or trained for legal work
There are also ethical and transparency risks to consider: whether clients have consented to the use of generative AI tools on their matters, and how such tools will use and hold their personal data. There have already been several instances of clients using AI detection tools to challenge lawyers who employed generative AI without the clients’ prior knowledge and approval. Law firms should also consider whether cost savings resulting from the use of generative AI are being passed on to clients.
If not already in place, it is strongly recommended that managers and supervisors now issue clear work instructions, acknowledged by all colleagues. Records should be kept of initial and updated guidance to protect the firm, as evidence of it may be required in the future. As with software patching, controls are only effective if the risk is understood and behaviour is managed consistently across the whole firm. All colleagues also need to be aware that the introduction of AI could create new cyber risks that criminals may exploit.
We currently consider the best form of risk management to be a prohibition on the use of any open-source generative AI programme for work-related purposes, whether on work-related devices or transmitted to them, unless specifically approved in advance, and in writing, by senior management. For this to work sensibly, senior management must themselves be sufficiently informed to make judgements on such approvals or refusals. The Solicitors Regulation Authority (SRA) requires that firms ensure “governance frameworks remain fit-for-purpose and underpin the responsible adoption, use, and monitoring of AI.”
If disciplinary consequences are intended, it should be made clear when the prohibition is issued that breach of the requirement could result in significant repercussions for both the firm and its employees.
If firms choose to permit the use of generative AI tools, it is essential to establish clear guidelines regarding:
· The scope of permitted use
Clearly define how and for what purposes generative AI may be used.
· Authorised and prohibited tools
Specify which particular generative AI tools are authorised and which are prohibited.
· Authorised user groups
Clearly identify who within the firm is authorised to use generative AI tools.
· Output monitoring
Develop short- and long-term strategies to provide assurance on the accuracy and reliability of results generated.
· Record keeping
Establish protocols for documenting when generative AI tools are used and how the output is verified (a minimal illustration of such a record follows this list).
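As a purely illustrative sketch of what such record keeping might look like in practice, the Python snippet below appends one entry per use to a simple register file. The field names and the ai_use_register.csv file are hypothetical assumptions, not a prescribed standard; firms would adapt this to their own document management systems.

```python
# Illustrative sketch only: one possible shape for an AI-use register entry.
# Field names and the register file name are hypothetical assumptions.
import csv
import os
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    matter_ref: str          # client matter the tool was used on
    tool: str                # which approved tool was used
    user: str                # who used it
    purpose: str             # what it was used for
    output_verified_by: str  # who checked the output
    verification_notes: str  # how the output was verified
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: AIUseRecord, path: str = "ai_use_register.csv") -> None:
    """Append one entry to a simple CSV register, writing headers if the file is new."""
    row = asdict(record)
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# Example entry (all values hypothetical):
append_record(AIUseRecord(
    matter_ref="M-2024-0123",
    tool="Approved internal tool",
    user="a.solicitor",
    purpose="First-draft summary of disclosure documents",
    output_verified_by="s.partner",
    verification_notes="All citations checked against source documents",
))
```

The essential point is not the technology but the discipline: each use is tied to a matter, a named user, and a named verifier, so the firm can evidence both the use and the verification after the fact.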
Firms will also need to consider how they inform their clients about the use of generative AI tools — both for new instructions and ongoing matters. The SRA recommends that firms inform clients when AI will be used for their case, and how it will operate.
Some underwriters may ask questions about the usage and adoption of generative AI, and whether the organisation is exploring a firm-wide system. Further anticipated questions include:
· How are you currently controlling usage and minimising the risks from open-source, public generative AI tools?
· Have you issued a policy on the use of generative AI, and if so, how are you monitoring adherence to it?
· Do you use any tools to check and monitor unauthorised use of generative AI, and if so, what have you found? (A simple example of what such monitoring might look like is sketched below.)
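To make that last question concrete, the sketch below shows one simplistic way unauthorised use might be detected: scanning web-proxy logs for traffic to known generative AI services. The domain list, the proxy.log file name, and the log format are all assumptions made for illustration; in practice this is usually handled by a firm's existing web-filtering or data loss prevention tooling.

```python
# Illustrative sketch only: flag proxy-log lines that mention known
# generative AI domains. The domain list and log format are assumptions;
# firms would normally rely on existing web-filtering/DLP tooling.
import re
from typing import Iterable, Iterator

# Hypothetical, non-exhaustive list of generative AI service domains.
GENAI_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
]
PATTERN = re.compile("|".join(re.escape(d) for d in GENAI_DOMAINS))

def flag_genai_requests(log_lines: Iterable[str]) -> Iterator[str]:
    """Yield any log lines that reference a known generative AI domain."""
    for line in log_lines:
        if PATTERN.search(line):
            yield line.rstrip()

if __name__ == "__main__":
    # Assumed usage: one request per line in a plain-text proxy log.
    with open("proxy.log", encoding="utf-8") as log:
        for hit in flag_genai_requests(log):
            print(hit)
```

Even a crude check of this kind gives a firm an evidence-based answer to the underwriter's question, rather than an assertion that no unauthorised use is occurring.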
Even if your insurer does not ask about the use and control of generative AI at your next renewal, other stakeholders, such as clients, regulators, or colleagues, may begin to enquire. Indeed, we expect all of these stakeholders to be looking for clear answers on these points very soon. It is important for law firms to proactively address these concerns and establish robust risk management practices to ensure the responsible and secure use of generative AI tools.