By Randy Myers
The stable value industry is built on a foundation of contracts: guaranteed investment contracts, contract value wrap contracts, and group annuity general and separate account contracts. For anyone employed in the industry, those contracts make for a lot of reading. They also make a lot of work for lawyers. But soon, drafting, reading, and interpreting stable value contracts could become much faster and easier thanks to advances in artificial intelligence.
AI has been around in one form or another for decades, but the latest iteration of the technology, large language models, is the one that has technologists and business leaders alike salivating over its potential to revolutionize the way much work gets done. Large language models, or LLMs (also known as foundation models), are an advanced form of deep learning, which in turn is an advanced form of machine learning. LLMs are self-trained on what Nick Vandivere of information conglomerate Thomson Reuters describes as “inconceivably large” amounts of data. Like traditional deep learning models, he says, they can uncover features and patterns in data that often are not observable through human inspection alone.
Perhaps the best-known LLM application today is ChatGPT, a chatbot released in November 2022. ChatGPT can generate content (articles, reports, critical analyses, letters, emails, even poems) based on simple, natural-language prompts from users. More such products are on the way; they are products, Vandivere told participants at the 2023 SVIA Fall Forum, “that ultimately are likely to benefit your industry.”
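To make that interaction concrete: a chatbot like ChatGPT sits on top of an ordinary programmatic interface that takes a plain-English prompt in and returns generated text. The sketch below, which assumes the OpenAI Python SDK and uses an illustrative model name and prompt not drawn from the article, shows how little is involved from the user's side.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# A simple, natural-language prompt; no special query syntax is needed.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not one named in the article
    messages=[{
        "role": "user",
        "content": "Draft a short plain-English summary of what a "
                   "guaranteed investment contract is.",
    }],
)

print(response.choices[0].message.content)
```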
He pointed, by way of example, to Microsoft Copilot, an AI application that Microsoft recently integrated into the enterprise version of its office productivity suite, Microsoft 365. Users can ask Copilot to draft and edit documents in Word, summarize long email threads in Outlook, or identify insights from data in Excel, to cite just a few examples.
“At some point next year, you will probably all start using Microsoft Copilot,” Vandivere told his audience. “It will likely change the way you work with a lot of the content right there on your desktop.”
Another AI product the stable value community may benefit from is Document Intelligence, offered by Thomson Reuters, for which Vandivere leads product and go-to-market strategy. It can read, interpret, and draft contracts, and it can quickly review long contracts to identify critical provisions, pinpoint risks, and suggest areas for proactively managing risks and obligations. Applied at scale to a portfolio of thousands of contracts, it can complete in minutes work that might take a team of people days, weeks, or even months to finish.
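Thomson Reuters has not published Document Intelligence's interface, so the following is only a hedged sketch of what contract review at scale can look like in principle: loop over a folder of contracts and ask a generic LLM the same review question about each one. The folder name, prompt, and model are all assumptions made for illustration.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative review instruction; a production system would use far more
# carefully engineered prompts plus human validation of the results.
REVIEW_PROMPT = (
    "You are reviewing a stable value contract. Identify the critical "
    "provisions (termination, fees, events of default) and flag any "
    "language that creates risks or obligations worth managing proactively."
)

def review_contract(text: str) -> str:
    """Ask the model to flag key provisions and risks in one contract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# The same review, applied mechanically across a whole portfolio.
for path in Path("contracts").glob("*.txt"):  # hypothetical folder of contracts
    print(f"--- {path.name} ---")
    print(review_contract(path.read_text()))
```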
Loyrn Limoges, an attorney who serves as a subject matter expert at Thomson Reuters, stressed that Document Intelligence operates at a level far more sophisticated than a “Control-F” search for words or phrases in Microsoft Word. By way of example, she described trying to find any reference to force majeure provisions in a batch of contracts, even when they aren’t labeled as such.
“We train the AI to look for the intent of the language,” she explained. “It’s going to find that language even if the common provision name does not exist.”
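The difference between keyword search and searching on intent can be illustrated with an ordinary embedding model. The sketch below uses the open-source sentence-transformers library, not anything Thomson Reuters has described, and the clause texts are invented; the point is that the force majeure clause matches on meaning even though the words “force majeure” never appear in it.

```python
from sentence_transformers import SentenceTransformer, util

# A small open-source embedding model; Thomson Reuters' own models are
# proprietary, so this choice is purely illustrative.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Describe the concept we want in plain language, not by its label.
query = ("A clause excusing a party from performing its obligations "
         "because of events beyond its control, such as disasters or war.")

clauses = [
    "Neither party shall be liable for any failure to perform caused by "
    "acts of God, fire, flood, war, or other causes beyond its control.",
    "The issuer shall credit interest at the rate set forth in Schedule A.",
    "Either party may terminate this contract on ninety days' written notice.",
]

query_vec = model.encode(query, convert_to_tensor=True)
clause_vecs = model.encode(clauses, convert_to_tensor=True)

# Rank clauses by similarity of meaning, not by shared keywords. The first
# clause scores highest even though it never says "force majeure".
scores = util.cos_sim(query_vec, clause_vecs)[0].tolist()
for score, clause in sorted(zip(scores, clauses), reverse=True):
    print(f"{score:.2f}  {clause[:60]}...")
```

Because both the query and the clauses are reduced to vectors that capture meaning, a clause about “causes beyond its control” lands near a plain-language description of force majeure, which a literal word search would miss entirely.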
Unlike ChatGPT, which has been trained on the vast amount of information available on the internet, Document Intelligence is trained on Thomson Reuters’ own data, including content produced by its large internal network of lawyers. It can also be trained on a client’s data, Vandivere noted, with provisions to protect the privacy of that data. Controlling the content on which the AI is trained, Vandivere said, should help avoid the “hallucinations” sometimes generated by LLMs trained on the internet: outputs that can seem entirely plausible even when they are inaccurate.
Limoges stressed that anyone worried about AI’s impact on their own job should view the technology as a tool that will support them, not replace them.
“Don’t think about AI in this sense as another professional staring over your shoulder,” she said. “It’s really … a task assistant. It should make you faster and … [let you] rest easier.”
Vandivere concurred. While large language models are good at writing plausible, coherent text and at synthesizing what they have learned into new content, he said, they are not yet good at grounding their output in facts and truth; controlling for bias, harm, or abuse; or conducting abstract reasoning. Accordingly, there is still a need for humans to review AI’s work and bring to bear on it all their accumulated experience and expertise.