Cyber insurance market update
In the first half of 2024, the London cyber insurance market saw continued rate reductions, with double-digit decreases for the second consecutive quarter. Close to a quarter (24%) of Marsh clients increased their overall limits, reflecting greater competition in the primary space as insurers pursue significant growth targets. Increased carrier competition is giving clients additional options and expanded risk management services.
Deepfake AI: Risks and exposures
Deepfake content consists of video, images, or audio produced with artificial intelligence (AI) to appear authentic and realistic. As AI technology has become more widely available, deepfakes are increasingly being used for malicious purposes. Hoaxes, phishing, and sophisticated social engineering have long been used by cybercriminals to target both individuals and organisations, and the proliferation of AI has increased the sophistication and volume of attacks using AI-generated content.
Experts monitoring these threats have noted two significant AI-driven changes to social engineering tactics. First, threat actors can now produce far higher volumes of convincingly written, tailored phishing emails. Second, with easy access to deepfake AI video, audio, and images, manipulation tactics have become increasingly difficult to detect and protect against.
Threat environment
The London cyber insurance market is placing growing focus on how organisations manage major cyber events and on their risk management processes, including defences against evolving, malicious deepfake AI activity.
Cyber events and the threat of deepfake AI attacks continue to be relevant to law firms given their:
Management of large volumes of sensitive, confidential, and valuable data
Management of extensive commercially sensitive information and processes
Management of client funds
Increased dependency on computer systems to transact with clients, business partners, and financial institutions
The threat environment remains consistent with trends from 2023, with ransomware and supply chain attacks continuing to increase, particularly for firms with US exposures.
Examples of deepfake AI attacks
In the professional services and corporate sphere, there have been notable examples of individuals being manipulated via deepfake AI video calls into believing they were speaking with senior leadership. In one widely publicised case, an employee was duped into sending £20 million to criminals in a deepfake scam.
More recently, UK Foreign Secretary David Cameron was targeted in a similar hoax call: Lord Cameron believed he was speaking with the former president of Ukraine, Petro Poroshenko, an example that further underlines the sophistication of such tactics. After investigating, the UK Foreign Office confirmed that both the video call and accompanying messages were fake, and the UK Government chose to confirm this publicly to raise awareness of the deepfake threat.
While neither case is a ransomware attack, both highlight how easily such tactics could be leveraged to enable a ransomware extortion scenario. Ransomware attacks conventionally involve threat actors encrypting a firm's systems while simultaneously exfiltrating large amounts of data to maximise the firm's incentive to pay. For the legal sector, the concern is that deepfake AI voice and video calls could manipulate employees into providing direct access to systems, allowing ransomware groups to reach sensitive client and employee data. Given the confidential nature of the data law firms hold, the sector could become a prime target and attract higher ransom demands. Attacks can interrupt business operations and cause considerable reputational damage.
Responding to deepfake AI
Deepfake AI has the potential to be a potent threat vector and, as AI technology evolves, more malicious actors may use deepfakes to attack organisations.
Threat actors could use deepfake AI to pose as a law firm's CISO or IT director, emailing members of the firm's cyber security/IT team under that identity to request access to particularly sensitive data or case file holdings, or to deliver malicious links or attachments. These are classic, tried-and-tested social engineering techniques, and deepfakes make a phishing attack far more likely to succeed. From there, threat actors could deploy ransomware, encrypt backups, and extort ransom payments.
Losses arising from the hostile use of AI in deepfake attacks are likely insurable, either under cyber liability or commercial crime policies. However, it is important to have a robust risk management framework in place that accounts for deepfake risks.
Law firms should take a strategic approach to building resilience so they are better positioned to identify and manage deepfake risks. An integral part of this process is training employees and raising awareness of deepfakes and their potential impact on the organisation. Firms should also consider developing policies, guidelines, and processes that provide a second or third means of verifying certain requests: for example, establishing pre-agreed contact details with vendors for payments at the start of a contract, and requiring dual signatories for payments above a predetermined threshold, as illustrated in the sketch below.
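To make the dual-control idea concrete, the following is a minimal illustrative sketch in Python. The threshold, vendor registry, and function names are hypothetical, invented for this example; it is not a Marsh recommendation or any firm's actual payment system.

```python
# Illustrative only: a hypothetical dual-control check for outgoing payments.
# The threshold, registry, and names below are invented for this sketch.
from dataclasses import dataclass

DUAL_SIGNATORY_THRESHOLD = 10_000  # hypothetical firm-set limit (GBP)

# Pre-agreed vendor account details, captured at contract inception.
# In practice these would live in a controlled, access-managed system.
KNOWN_VENDOR_ACCOUNTS = {
    "vendor-001": "GB29NWBK60161331926819",
}

@dataclass
class PaymentRequest:
    vendor_id: str
    account: str         # account details supplied in the request
    amount: float        # in GBP
    approvers: set[str]  # distinct staff who have approved the payment

def payment_checks(req: PaymentRequest) -> list[str]:
    """Return a list of control failures; an empty list means the request passes."""
    failures = []
    # Verify against pre-agreed account details on file, never against
    # details supplied in the (possibly spoofed or deepfaked) request itself.
    if KNOWN_VENDOR_ACCOUNTS.get(req.vendor_id) != req.account:
        failures.append("account does not match pre-agreed vendor details")
    # Require dual signatories above the predetermined threshold.
    if req.amount > DUAL_SIGNATORY_THRESHOLD and len(req.approvers) < 2:
        failures.append("payments above threshold require two approvers")
    return failures

# Example: a large payment with only one approver is flagged.
req = PaymentRequest("vendor-001", "GB29NWBK60161331926819", 25_000, {"partner-a"})
print(payment_checks(req))  # -> ['payments above threshold require two approvers']
```

The point is the control logic, not the code: the account on file, rather than the one quoted in an email or call, is the source of truth, and large payments cannot proceed on one person's say-so.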
Of equal importance is continuously strengthening the firm's resilience through the adoption of security controls. Law firms should consider using AI detection tools, such as Deepware or Microsoft Video Authenticator, which can assist in detecting deepfake content, including face swaps, AI-generated imagery, and manipulated audio. Robust email security should also be implemented, including:
Domain-based Message Authentication, Reporting, and Conformance (DMARC)
Sender Policy Framework (SPF)
DomainKeys Identified Mail (DKIM)
Together, these measures help prevent spoofing and phishing attacks: SPF verifies that mail originates from servers authorised by the domain, DKIM cryptographically validates message authenticity, and DMARC tells receiving servers how to handle messages that fail those checks.
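For teams wanting to confirm that these policies are actually published, SPF and DMARC records can be read straight from DNS. The sketch below uses Python with the third-party dnspython library and is illustrative only; the domain is a placeholder, and DKIM is omitted because validating it requires the selector from an individual message header rather than DNS alone.

```python
# Minimal sketch: read a domain's published SPF and DMARC records from DNS.
# Requires the dnspython library (pip install dnspython). The domain below
# is a placeholder; this inspects published policy records only.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself, starting "v=spf1".
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    print(f"SPF:   {spf[0] if spf else 'not published'}")
    # DMARC lives in a TXT record at _dmarc.<domain>, starting "v=DMARC1".
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"DMARC: {dmarc[0] if dmarc else 'not published'}")

check_email_auth("example.com")  # placeholder domain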
Law firms are not immune to evolving cyber threats, particularly deepfake AI social engineering. Firms must prioritise cybersecurity to safeguard sensitive client and employee information, maintain the trust of clients and stakeholders, and mitigate the financial and reputational impact of cyberattacks. It is crucial for law firms to maintain robust, consistent, and widespread cybersecurity measures, including employee training, network security protocols, data and backup encryption, and regular vulnerability assessments, to counter the increasingly sophisticated deepfake AI environment.
Authors: Samuel Scott, Cyber, Media & Technology Practice, Marsh Specialty
Nam Qureshi, Vice President, UK FINPRO, Marsh
MARSH
3rd Floor, 45 Church Street, Birmingham, B3 2RT
T: +44 (0) 121 626 7909
M: +44 (0) 7825 100 997