
Move Quickly to AI, but Be Smart and Manage Your Risk

Interview with Nick Vandivere

August 30, 2023

Nick Vandivere of Thomson Reuters

Nick Vandivere is Vice President, Product, at Thomson Reuters, where he leads product and go-to-market strategy for Document Intelligence, the latest artificial intelligence solution in the Thomson Reuters Legal Technology portfolio. Prior to joining Thomson Reuters, he served as CEO of ThoughtTrace, where he oversaw the company’s transformation into a leader in applied artificial intelligence and machine learning.

Today’s General Counsel recently interviewed Nick Vandivere, Vice President, Product, at Thomson Reuters.

Let’s start with your take on how the market and technology have evolved since you first started working in the legal industry.

So, if we zoom out a bit and take a broader look, I’ve noticed two major changes in the market over the last five to ten years. Back in 2016 and 2017, when we created the Document Intelligence platform, it was quite a challenge to convince people that our AI technology could actually read contracts and assist them in their jobs. Some folks were downright skeptical, and some even thought it was all smoke and mirrors. We had to work hard to show them that, yes, this was absolutely possible, and no, it wasn’t going to replace them.

Going forward, functional AI is going to be very much a human-in-the-loop technology, and I think the market has come a long way on that. Now, when we meet with clients (and I’m guessing this holds true for other vendors too), the focus has shifted. It’s no longer about proving that AI is real and that it works. That battle has been won. The question now is whether our specific product, our AI, can meet their needs effectively.

That’s a big shift, and I think it’s one that portends very good things for legal tech going forward. We’re past the phase of mass skepticism in the general market and into the phase of, “How can this product actually meet my needs?” That’s a big change.

What are some of the mistakes you see organizations making when it comes to AI?

I think the biggest mistake someone can make is to blindly commit to a specific technology without fully understanding whether it will truly solve their problems. I’m not suggesting that everyone should spend six months testing a technology before making a decision; that’s not realistic for either side, to be honest. However, I do believe that everyone needs to focus their AI implementations, or any innovation goal for that matter, on solving what needs to be fixed or improved. So begin with that end in mind, not with the technology itself, no matter the hype or the latest trends.

Once that is clear, and you find a partner you trust with a solution that works for you, focus on achieving real success, not just in terms of what the AI can do, but also in terms of people actually using and embracing it.

Another common mistake I’ve seen is companies expecting their legal teams to train AI models themselves to fit their organizational requirements. You see, AI requires a substantial amount of data, and that data must also be diverse in order to build scalable and accurate models. Often, when companies try to train their own models, they find themselves stuck six months later with minimal progress and a long way from seeing any return on investment or value. Most legal teams aren’t experts in training AI models, so my advice would be to partner with someone you trust, someone with a history rooted in innovation who specializes in that area.

So how have the recent generative AI developments that have taken 2023 by storm impacted your customers?

I think the really cool thing is the immense buzz about the impact of AI in the short and long term. It’s raising awareness and urgency among organizations to figure out AI, and ChatGPT has played a huge role in that. It’s had a much bigger impact than anything else I can think of, except maybe when IBM Watson won Jeopardy. That created a lot of buzz, but it was less practical compared to large language models (LLMs) like ChatGPT. With ChatGPT, I can actually ask it to generate an assignment provision for me as a legal professional, and it’ll give me something that sounds good. The only issue is that it may not always be accurate or aligned with the company’s drafting playbook or risk tolerance. That’s a challenge that can be addressed by integrating LLM-enabled technology with expert legal content and oversight.

However, this buzz has led to two responses. Organizations that are early adopters of technology, like law firms and corporations, are moving faster to gain an advantage. It’s a smart move, and right now we can see that they’re acting with a greater sense of urgency. On the other hand, there are those in the broader market who prefer a wait-and-see approach. They feel the urgency too, but they want to see what actually works and who the winners will be before they make their decisions. Moving quickly has its advantages if done well, although it carries some risks. But there’s definitely risk and loss in moving slowly as well; people tend to overlook the downsides of waiting. So the key, to me, is to move quickly but in a smart way that manages your risk.

I know data security is important when it comes to using new technologies in relation to sensitive corporate and legal data. What are your thoughts on the best approach to this?

This is a big question that needs addressing, and technology providers should have clear answers about what their applications can and cannot do, as well as where potential risks lie. And here’s the thing: risks will be present to some extent in everything, and it’s important for those risks to be openly acknowledged. Take generative AI, for example. It involves inputting information and receiving a response that’s not pre-programmed but finely tuned. That, in itself, poses risks in terms of how people actually use that data. So the number one priority should be transparency about what a product really is, not just presenting it as a fancy wrapper for widely available technology. There’s a lot to figure out here, but as vendors, we have a responsibility to mitigate a significant portion of those risks. And where risks do remain, we must be completely transparent about them.

It’s taken time for the legal market to become comfortable with supervised learning technologies that help with specific tasks like contract review, and then generative AI arrives and creates uncertainty. Have these recent developments complicated this general view of AI in the legal community? Are customers able to compartmentalize those various offerings and understand where AI can work for them?

Many customers haven’t quite figured out how to handle this yet, so we really need to focus on educating the market. But if you want to make the most of generative AI, you still have to put in a great deal of effort preparing the data and doing it right. It’s no different from supervised AI, which excels at document classification, provision extraction, and recognizing similar topics through clustering: you still have to figure out where a certain document is stored, which system to look in, find the relevant version, and understand the meaning behind the clauses you’re filtering on. The same kind of preparation is fundamental to getting the best results from generative AI as well.

And all of this assumes that the existing content repositories of law firms and corporations, such as the contracts they’ve already signed, will still hold value in the future. I believe they will, and I don’t think that’s a controversial opinion. If they are valuable, then the organization’s ability to accurately categorize those documents and understand their semantic and contextual meaning becomes crucial. The better you are at that, the more you’ll ultimately benefit from generative AI, regardless of the solution you choose. So the AI tools that are currently in use, and used well by some, should be more widely adopted, because they are a necessary foundation for getting the most out of generative AI in any organization going forward.

If you’d like to continue the discussion, please reach out to nick.vandivere@thomsonreuters.com.
