Electronic placement: Driving the data agenda

Electronic Risk Placement (EP) has been a controversial subject since the idea first emerged in the London Market in the 1980s. Even its proponents have only ever positioned it as a support mechanism, rightly accepting that it could never usurp the market's key strength: face-to-face collaboration between expert practitioners, whose relationships and skill make the specialty market what it is.

Its detractors, however, have long feared EP as the precursor to full automation, computable contracts and ultimately, with AI, the demise of the insurance market practitioner.

The truth, of course, lies somewhere in between. Although EP could potentially enable automation of smaller, simpler risks, declarations and less impactful endorsements, the larger, more complex risks will never lend themselves well to it. Component parts of complex submissions could be handled more efficiently if their data were conveyed in a standardized electronic manner. However, for the vast majority of placements undertaken electronically in the market even now, automated underwriting remains a pipedream, and the challenge of making it happen would be immense.

Repeated criticisms have been levelled at London for not adopting true, document-free electronic trading, usually by those from a general insurance or securities trading background who nailed it decades ago. The point those specialists miss is that commercial and specialty insurance risks cannot easily be expressed as convenient, small packages of standardized data elements and traded the way their transactions are. Our data set encompasses a huge, complex and constantly evolving range of insurable things, people and processes, and its deconstruction into practical, implementable data standards has so far proven near impossible.

And London is but one market for the global brokers, who are justly reluctant to invest huge sums in adopting standards and processes solely for London's benefit.

Consequently, EP in the London Market, which has been live since 2016 and genuinely successful since 2020, sought only in the first instance to reduce paper-based manual inefficiencies, streamline arcane and diverse placement processes and provide a central audit record. Even that was hard enough. Using an approach known as “document-first”, electronic documents are combined with a small amount of supporting data to aid workflow and glue it all together. If you want more data, you have to “scrape” it out of the documents.
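
To make the distinction concrete, here is a minimal sketch of what a document-first submission record might look like. It is illustrative only, not any platform's actual schema, and every field name is a hypothetical assumption: the substance of the risk still lives in the attached documents, with only a thin layer of data alongside to drive workflow and the audit trail.

```typescript
// Illustrative "document-first" payload (hypothetical field names, not any
// real platform's schema). The substance of the risk sits in the attached
// documents; the data carried alongside exists mainly to route, track and
// audit the submission.
interface AttachedDocument {
  fileName: string; // e.g. "slip.pdf" or "schedule_of_values.xlsx"
  mimeType: string; // PDF, spreadsheet, etc.
  sha256: string;   // content hash for the central audit record
}

interface DocumentFirstSubmission {
  submissionId: string;          // platform-generated reference
  brokerRef: string;             // broker's own reference
  insuredName: string;
  classOfBusiness: string;       // e.g. "Property D&F"
  inceptionDate: string;         // ISO 8601 date
  status: "draft" | "quoted" | "firm_order" | "bound";
  documents: AttachedDocument[]; // slip, schedules, SOVs as files
}
// Anything richer (limits, deductibles, schedule detail) still has to be
// "scraped" out of the documents themselves.
```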

This approach proved successful beyond doubt when the pandemic triggered a rapid rethink across the market: electronic trading volume accelerated in most risk classes to a very high proportion of overall market trading (over 90%) and has remained there ever since.

So, it works. Brokers use a standardized risk placement method for all submissions in a single repository and can “remotely broke” simpler ones which would otherwise be inefficiently broked by hand. Underwriters benefit from a simple portal through which they assess, compare, negotiate and bind contracts, downloading the final versions into their administration systems — all from the box, the office or working from home.

Further, transformative benefits are available with integration. Market participants integrated with a placing platform via APIs (Application Programming Interfaces) are freed from manually transposing submissions to and from their systems, eliminating re-keying effort and the risk of introducing errors and omissions.
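
A hedged sketch of what such an integration might look like follows; the endpoint, token handling and field names are assumptions for illustration, not any vendor's published API. The point is simply that a bound contract can be pulled as structured data rather than re-keyed by hand.

```typescript
// Hypothetical integration sketch: pull a bound contract from a placing
// platform's API instead of re-keying it. The URL, authentication and
// field names are assumptions, not a real vendor interface.
interface BoundContract {
  umr: string;            // Unique Market Reference
  insuredName: string;
  inceptionDate: string;  // ISO 8601 date
  signedLinePct: number;  // this carrier's signed line
  premium: { amount: number; currency: string };
}

async function importBoundContract(umr: string, apiToken: string): Promise<BoundContract> {
  const response = await fetch(`https://placing-platform.example/api/contracts/${umr}`, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!response.ok) {
    throw new Error(`Could not retrieve contract ${umr}: HTTP ${response.status}`);
  }
  // The admin system can now store this record directly, with no transposition.
  return (await response.json()) as BoundContract;
}
```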

But the market’s appetite for real data, currently embedded in those documents or not yet captured at all, is growing fast. The quest for that data, and the ultimate removal of the documents from the process, is an approach known as “data-first”. It’s a broad term that starts with replacing only the slip contract and ends, potentially, with the whole shebang (slip, schedules and all) being standardized and digitized, and it’s being driven by a number of ambitions.

Firstly, a relatively easy one. A byproduct of EP is that underwriters can receive many more broker submissions than they used to, so they need to triage and prioritize them quickly or their service levels can deteriorate. To help, their systems must know more about each submission without having to interpret the data held in its documents. Proper data defining the risk’s essential qualities and quantities can be used quickly and efficiently by the underwriter’s systems, and any additional broker effort to supply that data from their own systems will be rewarded with more rapid quote turnaround. And slow turnaround time is reportedly a growing market problem.
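
As a rough illustration of that triage point, the sketch below ranks an inbox of submissions using only a handful of structured fields. The fields, weights and thresholds are invented for the example; the value lies in being able to prioritize at all without opening a single document.

```typescript
// Illustrative triage of an underwriter's inbox using structured submission
// data. Field names and weightings are assumptions, not a market standard.
interface SubmissionSummary {
  submissionId: string;
  classOfBusiness: string;
  totalSumInsured: number; // single currency assumed for the example
  quoteNeededBy: string;   // ISO 8601 date
  existingClient: boolean;
}

function triageScore(s: SubmissionSummary): number {
  const msPerDay = 86_400_000;
  const daysLeft = Math.max(1, (new Date(s.quoteNeededBy).getTime() - Date.now()) / msPerDay);
  const sizeWeight = Math.log10(Math.max(s.totalSumInsured, 1));
  const clientWeight = s.existingClient ? 1.5 : 1.0;
  return (sizeWeight * clientWeight) / daysLeft; // higher score = look at it sooner
}

// Sort so urgent, large, existing-client business rises to the top of the inbox.
const prioritise = (inbox: SubmissionSummary[]) =>
  [...inbox].sort((a, b) => triageScore(b) - triageScore(a));
```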

Secondly, underwriters are increasingly using systems to quantify, analyze and quote risks before ultimately storing, aggregating and administering them, so their systems need to be fed accurate data from the slip and the detailed schedules of values (SOVs). Getting all that out of a PDF contract and numerous randomly designed spreadsheets containing inconsistent, ambiguous or erroneous data is hugely inefficient and expensive for the whole market.
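
To illustrate why standardization matters here, the sketch below defines one possible shape for an SOV row and a basic validation pass. The column set is a deliberately small assumption; real standards would be far richer, but even this much turns hand-cleaning spreadsheets into an automated load-and-check step.

```typescript
// Hypothetical standardized schedule-of-values (SOV) row with basic checks.
// The columns are an assumption for illustration, not an actual market standard.
interface SovRow {
  locationId: string;
  address: string;
  country: string;           // ISO 3166-1 alpha-2
  occupancy: string;
  totalInsuredValue: number; // one agreed currency per schedule
  currency: string;          // ISO 4217
}

function validateSovRow(row: SovRow): string[] {
  const errors: string[] = [];
  if (!/^[A-Z]{2}$/.test(row.country)) errors.push("country must be ISO 3166-1 alpha-2");
  if (!/^[A-Z]{3}$/.test(row.currency)) errors.push("currency must be ISO 4217");
  if (!(row.totalInsuredValue > 0)) errors.push("totalInsuredValue must be positive");
  if (row.locationId.trim() === "") errors.push("locationId is required");
  return errors; // an empty array means the row can be aggregated automatically
}
```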

The Market Reform Contract (MRC) v3 goes some way towards addressing the slip part of this problem, in that it conveys data as well as text, but that data still has to be reliably populated by the broker. And, as it’s still a document, it’s certainly not ideal. The schedules and other supporting information, though, present an extraordinarily complex problem to solve, requiring the standardization of vast numbers of insured “things” and the mandatory use of those standards throughout the distribution chain, possibly even by the customers themselves. Achieving that is no small feat, challenging our brightest minds both technologically and commercially. One thing is certain: the specialty London market relishes a challenge, and while we are some way off this monumental chunk of the data puzzle being solved across the market, our hunch is that AI might play a role in the solution.

Finally, as part of the ongoing Blueprint Two program, a new central accounting and settlement bureau system is being developed. By Phase 2, in April 2025, it will no longer accept submission of the PDF documents that are manually “scraped” by the bureau to gather the transaction, risk and market data. Instead, participants will submit the Core Data Record (CDR), a collection of data that will enable the entire post-bind process to be conducted automatically by Velonetic (formerly XIS, who run the bureau).

It is now generally understood that the job of collecting that CDR data and transmitting it to Velonetic will best be done by the EP platforms themselves. This makes sense topologically, but there is considerable debate over how to implement it. For one thing, the CDR is not populated entirely from broker input; it draws on underwriter input too, and on others behind the scenes, making the timing, workflow and, critically, validation of those inputs complex and onerous for all parties. And all of that must happen before a risk can be bound.
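
A conceptual sketch of that multi-party assembly problem follows. The split of fields between broker and underwriter, and the checks applied, are assumptions rather than the actual CDR specification; the point is that the record can only be released once every contribution is present and consistent.

```typescript
// Conceptual CDR assembly sketch: contributions arrive from more than one
// party and must reconcile before the record can be used. Field names and
// the validation rules are assumptions, not the real CDR specification.
interface BrokerContribution {
  umr: string;
  insuredName: string;
  grossPremium: number;
  currency: string;
}

interface UnderwriterContribution {
  umr: string;
  carrierCode: string;
  signedLinePct: number;
  riskCode: string;
}

interface CoreDataRecordDraft {
  broker?: BrokerContribution;
  underwriters: UnderwriterContribution[];
}

function readyForBind(draft: CoreDataRecordDraft): { ok: boolean; problems: string[] } {
  const problems: string[] = [];
  if (!draft.broker) problems.push("broker data missing");
  if (draft.underwriters.length === 0) problems.push("no underwriter lines recorded");
  const totalSigned = draft.underwriters.reduce((sum, u) => sum + u.signedLinePct, 0);
  if (draft.underwriters.length > 0 && Math.abs(totalSigned - 100) > 0.01) {
    problems.push(`signed lines total ${totalSigned}%, expected 100%`);
  }
  return { ok: problems.length === 0, problems };
}
```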

On the plus side, there is now strong competition in the EP market, which is absolutely essential to drive improvements in functionality, integration and quality of service, all of which inevitably suffer in a monopoly environment. Although PPL was once that monopoly provider, first Whitespace and then Ebix Europe have provided competing platforms, both gaining considerable traction alongside a number of private broker and underwriter platforms. This makes it essential that Blueprint Two is crystal clear on how it wants the CDR to be collated and managed before multiple technology developers make big investments.

The key here is integration. Although document-first is mostly conducted today without it, the CDR (let alone full-blown data-first) cannot realistically be. So every EP platform, and every broking and underwriting administration system, will need APIs that operate to some degree of market standards so they can be joined together and the data can flow before April 2025.

The coming year will be pivotal for the market’s modernization. The successful implementation of document-first EP was just the start, laying foundations for the future.

Capitalizing on that foundational success, and pushing for the CDR, for more and better data and for the APIs to integrate it, requires true collaborative engagement. That engagement must be between the Blueprint Two/Velonetic teams and the market’s many software vendors, not just the market participants. And we’ve not seen nearly enough of that yet at a detail level, where the Devil lives.

Successful collaboration will pave the way for the legion of efficiencies that data brings. Failure, conversely, risks adding another London Market modernization misstep to our long and costly history. Quite simply, we are at a stage of the market’s EP journey at which failure is not an option. Data, integration and collaboration are the driving forces of innovation now, and long may that continue.