WTW's pricing expert talks about AI impact

Duncan Anderson, global technology leader for insurance consulting and technology, Willis Towers Watson.

Pricing consists of four different areas: analysis, decisioning, deployment and monitoring. Technology has changed each of those in different ways. In insurance pricing, analysis means understanding the risk, that is, the likely cost of claims. On personal lines, it also means understanding policyholder behavior. Today, insurers worldwide have at their disposal a very rich, powerful toolkit of machine learning models that can very quickly and easily produce highly predictive models.
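To make the analysis stage concrete, here is a minimal sketch of the kind of multiplicative claims-cost model pricing teams build. All factor names, relativities and base figures are invented for illustration; real models would be fitted from claims data.

```python
# Hypothetical sketch of a multiplicative pricing model for the analysis
# stage: expected claims cost = base frequency x base severity, adjusted
# by rating-factor relativities. All numbers are invented for illustration.

BASE_FREQUENCY = 0.08   # expected claims per policy-year (assumed)
BASE_SEVERITY = 3500.0  # expected cost per claim (assumed)

# Frequency relativities by rating factor level (illustrative values only).
FREQ_RELATIVITY = {
    ("age_band", "18-25"): 1.60,
    ("age_band", "26-60"): 1.00,
    ("age_band", "60+"):   1.15,
    ("area", "urban"):     1.25,
    ("area", "rural"):     0.90,
}

def expected_pure_premium(risk: dict) -> float:
    """Expected claims cost: frequency (with relativities) times severity."""
    freq = BASE_FREQUENCY
    for factor, level in risk.items():
        freq *= FREQ_RELATIVITY.get((factor, level), 1.0)
    return freq * BASE_SEVERITY

print(round(expected_pure_premium({"age_band": "18-25", "area": "urban"}), 2))  # 560.0
print(round(expected_pure_premium({"age_band": "60+", "area": "rural"}), 2))    # 289.8
```

Modern machine learning replaces the hand-set relativities above with fitted ones, but the goal is the same: predicting the likely cost of claims for a given risk.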

With that powerful prediction come issues with interpretability, because a lot of these models are quite hard to understand. That can be a big problem in insurance for two reasons. Firstly, unlike marketing or other functions in insurance or other industries, where it's okay to have an 80-20 model, if you misprice insurance business, you can lose a lot of money very quickly. When things change, as they did during the COVID pandemic, some insurers are sharp at noticing; others are slow. Some relied on clever machine learning models calibrated pre-COVID that were no longer as good post-COVID.

Secondly, there's a wall of regulatory issues to be tackled. There are 50 flavors of U.S. regulation, and now quite a bit of pricing regulation to adhere to. For that, not only understanding what your models are doing, but also being able to explain them, is really important.

There's been less change in decisioning from a technological perspective. But given all this modeling, it's important to scenario test what you want to do, and what's the best thing for the business in underwriting, pricing or other actions in portfolio management. It's important to construct a calculation that predicts as accurately as possible what might happen.
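A scenario test of the kind described here can be sketched very simply: project the business outcome of candidate pricing actions under an assumed demand response. The volumes, margins and elasticity below are invented for illustration.

```python
# Hypothetical sketch of scenario testing in the decisioning stage:
# project annual profit under candidate rate changes, assuming retained
# volume responds to price with constant elasticity. All figures invented.

def scenario_profit(rate_change: float,
                    policies: int = 10_000,
                    avg_premium: float = 500.0,
                    avg_cost: float = 420.0,
                    elasticity: float = -1.5) -> float:
    """Projected profit after a proportional rate change.

    rate_change of +0.05 means premiums rise 5%; retention falls (or
    rises) in proportion to the assumed elasticity.
    """
    retained = policies * (1 + elasticity * rate_change)
    return retained * (avg_premium * (1 + rate_change) - avg_cost)

for change in (-0.05, 0.0, 0.05, 0.10):
    print(f"{change:+.0%}: projected profit = {scenario_profit(change):,.0f}")
```

Running several candidate actions through a calculation like this, with the best available prediction of cost and behavior plugged in, is what lets the business compare options before committing to one.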


WTW has developed proprietary machine learning models that are interpretable by design. We have patents pending on interpretable machine learning models that are just as predictive, but transparent: you can see which factors explain the risk and the behavior, understand very clearly what's going on, and manage the models much better as a result.
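To illustrate what "interpretable by design" means in general (this is a generic additive-model sketch, not WTW's patented method, and the contribution values are invented), the prediction can be built as a sum of per-factor contributions that an analyst can inspect directly:

```python
# Generic sketch of an interpretable-by-design model: the score is a sum
# of per-factor contributions, so the breakdown explains the prediction
# exactly. Not WTW's patented method; all values invented for illustration.

BASELINE = 100.0  # base risk score (assumed)

# Additive contribution of each factor level to the score (illustrative).
CONTRIBUTIONS = {
    ("vehicle_group", "sports"):  45.0,
    ("vehicle_group", "family"): -10.0,
    ("ncd_years", "0"):           30.0,
    ("ncd_years", "5+"):         -25.0,
}

def score_with_explanation(risk: dict):
    """Return (score, breakdown); the breakdown sums exactly to the score."""
    breakdown = {"baseline": BASELINE}
    for factor, level in risk.items():
        breakdown[factor] = CONTRIBUTIONS.get((factor, level), 0.0)
    return sum(breakdown.values()), breakdown

score, parts = score_with_explanation({"vehicle_group": "sports", "ncd_years": "0"})
print(score)  # 175.0
print(parts)
```

Because every prediction decomposes into named contributions, the model can be explained to a regulator or a portfolio manager without any post-hoc approximation.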

Once you decide what to do with pricing structures, claims, underwriting rules and case triaging rules, that has to be deployed into the real world, into a policy administration system. Increasingly, technology has enabled very complex things to be done at the point of sale. Perhaps most importantly, helped by the adoption of cloud computing, many systems out there are much more interoperable: they play more nicely with APIs, allowing calls from one system to another and a componentized approach. That enables the deployment of deep analytics, undiluted by errors and without a costly process, so you can deploy much more quickly and respond very quickly to developments in the market.
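The componentized approach can be sketched as a small rating component that speaks JSON, so a policy administration system can call it at the point of sale. The endpoint shape, field names and rating figures below are assumptions for illustration, not a real vendor interface.

```python
import json

# Sketch of componentized deployment: the rating model sits behind a small
# handler that accepts and returns JSON, as it would behind an API called
# by a policy admin system. Field names and figures invented for illustration.

RELATIVITY = {"urban": 1.25, "rural": 0.90}  # assumed area relativities
BASE_PREMIUM = 400.0                          # assumed base premium

def handle_quote_request(payload: str) -> str:
    """Decode a JSON quote request, rate it, return a JSON response."""
    risk = json.loads(payload)
    premium = BASE_PREMIUM * RELATIVITY.get(risk.get("area"), 1.0)
    return json.dumps({"quote_id": risk.get("quote_id"),
                       "premium": round(premium, 2)})

request = json.dumps({"quote_id": "Q-001", "area": "urban"})
print(handle_quote_request(request))  # {"quote_id": "Q-001", "premium": 500.0}
```

Because the rating logic lives in one component behind a stable interface, a pricing change is deployed by updating that component alone, which is what allows the quick market response described above.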

By analyzing whether a change actually matters, we can proactively identify when something needs attention and, if it does, automatically identify why. That allows model management to happen more easily, and supports wider portfolio management too.
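A minimal sketch of that monitoring idea: compare actual against expected claims experience by segment and flag only the segments that drift beyond a tolerance, so an analyst's attention goes where it is needed. Segments, figures and the tolerance are invented for illustration.

```python
# Sketch of automated monitoring: flag segments whose actual/expected
# claims ratio drifts beyond a tolerance, so attention is directed only
# where something may need it. All figures invented for illustration.

def flag_drift(actual: dict, expected: dict, tolerance: float = 0.10):
    """Return {segment: ratio} where |actual/expected - 1| > tolerance."""
    flags = {}
    for segment, exp in expected.items():
        ratio = actual.get(segment, 0.0) / exp
        if abs(ratio - 1.0) > tolerance:
            flags[segment] = round(ratio, 2)
    return flags

expected = {"young_drivers": 1_000_000, "fleet": 500_000, "home": 750_000}
actual   = {"young_drivers": 1_280_000, "fleet": 510_000, "home": 690_000}
print(flag_drift(actual, expected))  # {'young_drivers': 1.28}
```

In practice the "why" step would then drill into the flagged segment's rating factors, but the filtering shown here is what removes the routine checking from the expert's day.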

This automation removes the dross from an expert's life and enables them to do what they're good at, which is thinking and bringing insurance experience to bear. None of this is about replacing the expert; it's about empowering the expert. The insurers that will win are those that embrace technology developments and analytical tools, and use them effectively, understanding insurance and keeping their eye on the ball. Problems like failing to spot inflation, or models going wrong, come from inexperience in insurance management, overreliance on models, and approaches that were not fit for purpose. Empowering the experts so they can be experts is a big theme that we mustn't forget.