Data. An underwriter’s biggest asset, but also their biggest challenge – especially for those at the Commercial Lines Innovation Europe event last month.
From the speakers on stage to the people we chatted to during the networking breaks, the most consistent theme throughout the two-day event was how to solve the data deluge.
The availability and sheer volume of data has reached a level that traditional systems are struggling to cope with, from submission through to evaluation. It soon became clear that, in order to keep up with a rapidly changing risk landscape, underwriters are crying out for the right tools and skills to turn this data to their advantage.
Data reaches underwriters in an unwieldy number of formats – spreadsheets, email and network drives, for example – none of which ‘talk’ to each other, leaving underwriters’ heads spinning as they whip between systems (often literally). One of the people we spoke to on day one echoed this, commenting that the ‘fragmentation of systems’, the volume of raw, unstructured data and double-keying were the biggest challenges she faced on a daily basis. The process is not only clunky, it also leaves enormous room for error for even the most diligent of underwriters.
The immediate challenge facing underwriters is how to make sense of the wall of data they need to make accurate, informed risk decisions – and that wall is only growing. Sean McGovern, Chief Executive Officer, UK & Lloyd’s at Axa XL, highlighted the increasing need for businesses to report on sustainability targets, and for underwriters to take sustainability into account when pricing risks. Yet another stream of data that underwriters have never previously had to deal with – and yes, another spreadsheet…
Underwriters want systems that calm the submission chaos, enabling them to spend more time on what they do best – writing profitable risks and developing business – or, as RSA Chief Underwriting Officer Mandy Hunt suggested: ‘data ingestion must be easy to give underwriters more time.’
A need for speed (and accuracy)
As Greg Fleming, CIO of Everest International and our guest for a fireside chat keynote, explained, the risk landscape is changing all the time: “risks are changing and the losses are getting larger and larger, systems need to keep changing to match that”. But it’s not just the size and complexity of the risk – it’s the speed of change that threatens to leave underwriters behind if they don’t get access to up-to-the-minute data about the risks on cover. As one panellist put it, ‘monthly is simply not good enough anymore’ – systems need to be updated in real time, allowing underwriters to make accurate decisions on what’s on cover now, not what was on cover the previous month, or year.
Many panel sessions referenced inflation across the UK and beyond. Rapid inflation means that underinsurance is a real problem. Rising building material, labour and energy prices mean that commercial properties especially are at serious risk of being underinsured if insurers only have access to past years’ data. This has a tangible impact on the insurer’s bottom line, so underwriters need confidence that the data they are receiving is completely up to date – as one delegate commented to us, “we simply need accurate data, at our fingertips.”
A new breed of underwriter?
The increasing sophistication of underwriting systems necessitates the upskilling of underwriters and a broadening of the talent pool beyond the skillsets we’ve traditionally deployed.
One final over-arching theme to emerge from the conference was a clear requirement for better data literacy, and for talented people able to harness the many and varied forms of data available. As Hélène Stanway referenced in a recent blog, an estimated 50% of the London market is set to retire within the next 10 years. There is a pressing and immediate need to start bridging the talent gap. However, rather than necessarily looking for the traditional underwriter, the industry needs to broaden its search to people who can process and make sense of new forms of data. One panellist referred to data literacy as being a “core competency across the value chain”.
At Send, we’ve seen what a struggle it is for underwriters to wrangle data into shape. It’s one of the reasons we created our AI-powered Underwriting Workbench. This platform calms submission chaos by bringing multiple streams of data, both structured and unstructured, into one place for underwriters to review and assess quickly – essentially enabling underwriters to focus on the ‘art’ of underwriting instead of being mired in spreadsheets and rekeying.
In our panel at the event, Send founder, Ben, and Greg discussed how the Workbench can turn any underwriter bionic, augmenting processes where they will bring the biggest benefit and saving them time and energy on the manual parts of their job.
We really enjoyed the chance to showcase our Underwriting Workbench at this event. Judging by the responses of the people visiting our stand across the two days, it’s exactly what underwriters are looking for to help them deal with the data deluge and fall back in love with underwriting once more.