Sedgwick’s chief digital officer on road to ‘tactful’ gen AI use


What’s the best approach to take with the headline-grabbing tech?

Technology

By Gia Snape and Mia Wallace



In the last two years, better algorithms, larger models, and more expansive datasets have vastly accelerated the output of generative artificial intelligence (AI) – and the insurance industry has taken notice.

Many insurance organisations have said they are experimenting with various applications of ChatGPT and other large language models in a responsible way.

For Leah Cooper (pictured), Sedgwick’s newly appointed global chief digital officer, the real challenge in weaving generative AI into the insurance ecosystem was taking a tactful approach.

“In the last year, generative AI has been the talk of everything. How do you use it responsibly, but [also] how do you use it tactfully?” Cooper said.

“I think the tactful part has been one of the biggest challenges for companies who want to harness the power of gen AI but don’t know how to apply it to what they do. One of the fun things that I get to do is figure out how to make that relevant to us.”

Cooper’s role is newly established at Sedgwick, a global claims solution provider. As chief digital officer, Cooper is tasked with spearheading Sedgwick’s initiatives in technology research and development and advancing its digital roadmap and transformation strategy.

Sedgwick’s approach to generative AI

Cooper spoke to Insurance Business about seizing low-hanging fruit during the early days of experimentation with generative AI.


“We started very quietly, just researching, ‘is this going to be applicable? Anything that we can do?’ The good thing is the answer’s yes,” Cooper said.

“In our day to day, when we manage claims, a big chunk of that is evaluating the medical documents that come in support of claims. Sedgwick receives over 1.7 million pages of documents a day. We have thousands of people whose job is to sit and read these documents and then write a summary of them.

“So, maybe we don’t necessarily automate everything, but [what if] we automate parts of the process that they experience daily, take away the red tape and the administrative burden, expose the relevant information so that [the time it takes them to] learn about a claim goes from 15 minutes to two minutes, and be able to make a decision? Then, not only have we cut down the queue, but we’ve also improved the claimant’s experience because things speed up.”

In April this year, Sedgwick set out to create a prototype generative AI model that could run within the organisation’s ecosystem and use data reliably and privately to speed up document processing.

It introduced Sidekick, a Microsoft OpenAI GPT-4 integration designed to improve workflow for Sedgwick’s claims team.

The platform, Sedgwick’s first use case of GPT technology, aimed to improve claims documentation speed and accuracy and automate routine tasks.

“We had to make sure that we had a partner to help us create that environment so that our data stays very private. At the same time, we needed to be able to install tools within our environment and make them work internally,” said Cooper.


“The second thing that we had to do was make sure there was a very viable use case internally, and the low-hanging fruit was document summarisation.”
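The article does not describe Sidekick’s internals. Purely to illustrate the general pattern Cooper outlines, here is a minimal sketch, assuming a privately hosted Azure OpenAI GPT-4 deployment and the openai Python SDK; the deployment name, environment variables and the summarise_claim_document function are hypothetical, not Sedgwick’s implementation.

```python
# Minimal sketch of GPT-4 document summarisation via a private Azure OpenAI
# deployment. This is NOT Sedgwick's Sidekick; the endpoint, deployment name
# and function below are illustrative assumptions only.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # private, in-tenant endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarise_claim_document(document_text: str) -> str:
    """Ask a GPT-4 deployment to summarise one claim document for an examiner."""
    response = client.chat.completions.create(
        model="sidekick-gpt4",  # hypothetical deployment name
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarise medical documents submitted in support of "
                    "insurance claims. Surface only the information an examiner "
                    "needs to evaluate the claim; do not speculate."
                ),
            },
            {"role": "user", "content": document_text},
        ],
        temperature=0.2,  # keep summaries consistent across runs
    )
    return response.choices[0].message.content
```

The design point Cooper highlights is that the model is called through a private, in-tenant endpoint, so claim documents stay within the organisation’s own environment.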

The biggest challenges in deploying generative AI

Starting Sedgwick’s journey with the “low-hanging fruit” presented by generative AI allowed the organisation to dip its toes into the technology while considering broader use cases.

But while research and development were the first stages of the process, the next stage – operationalisation – was the biggest challenge, according to Cooper.

“How do you introduce new technology in a way that blends very seamlessly with a process that’s well-established and compliant?” she asked.

The other challenge was avoiding bias. To combat this, Sedgwick incorporated a feedback system through which claims examiners provide oversight of the AI tool.

“We basically gave [examiners] a survey to say, ‘Did this get it right? Did you have any concerns of bias from the results of the sets?’” Cooper said.

“We have humans reviewing the summarisation to ensure the technology works appropriately and within our AI governance policies.”

What’s next for Sedgwick’s digital strategy?

The launch of Sedgwick’s first use case of generative AI earlier this year doesn’t mean the process has ended. Cooper emphasised the iterative nature of model development.

“It’s a very iterative process, because we will get feedback like ‘It would be nice if it had also included effects’, and then we turn around and tweak the prompt engineering,” she said.

“As we move forward with additional models, every single model will need to go through this process, where our users gain confidence that it’s working correctly and, therefore, will adopt it.


“The worst possible thing we can have is a lack of faith that the tool is working correctly. Otherwise, it won’t save any time or improve the experience.”
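The prompt tweaking and examiner survey Cooper describes amount to small iterative changes paired with a structured review. Continuing the hypothetical sketch above (again, an illustrative assumption rather than Sedgwick’s actual prompts or survey), one iteration might look like this:

```python
# Continuing the hypothetical sketch above (illustrative only): revising the
# system prompt in response to examiner feedback and recording their review.
from dataclasses import dataclass

SYSTEM_PROMPT_V1 = (
    "You summarise medical documents submitted in support of insurance claims. "
    "Surface only the information an examiner needs to evaluate the claim."
)

# Feedback of the kind Cooper quotes ("it would be nice if it had also
# included effects") becomes an extra instruction in the next prompt version.
SYSTEM_PROMPT_V2 = SYSTEM_PROMPT_V1 + (
    " Always include any effects reported in the document."
)

@dataclass
class ExaminerReview:
    """One examiner's survey answer about a generated summary."""
    claim_id: str
    summary_accurate: bool  # "Did this get it right?"
    bias_concern: bool      # "Did you have any concerns of bias?"
    comment: str = ""

def needs_prompt_revision(reviews: list[ExaminerReview]) -> bool:
    """Flag a prompt version for another iteration if examiners report problems."""
    return any((not r.summary_accurate) or r.bias_concern for r in reviews)
```

In this pattern, a prompt version would only roll out more widely once examiner reviews stop flagging accuracy or bias problems, which is the confidence-building step Cooper says drives adoption.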

