Insurers, get your data right!

We recently had a chat with DQPro co-founder Nick Mair, who has a background in consulting with a focus on technology and change. DQPro is an innovative data monitoring and controls software designed for the specific needs of MGAs, brokers and the specialty insurance market. While "big data" regularly makes news headlines and is part of daily conversation, "small data" features far less in the insurance discourse, yet it is no less important. More about the consequences of underestimating core operational data in the interview below.


What is the story behind DQPro? Where did it all start and how did you get to where you are today?

N.M: We started as a consultancy, Atticus Associates, helping insurers improve their operations through technology and change. Consultancy gives you a great angle on where the pain points are – problems to solve, many of which recur across different businesses. From that vantage point, seeing the same challenges faced by so many firms, we saw a huge gap in how specialty insurers manage and trust their most important data, and that became the inspiration for DQPro.

We adopted a classic “Lean Startup” approach – pitching five slides describing the idea to five carriers. The response was overwhelmingly positive, with two offering to work with us and one even offering some funding. We launched with our first customer, Brit Insurance, which gave us a fantastic reference site, and we’ve been growing with new customers ever since. We are now extremely proud to provide a substantial proportion of the global specialty market with data confidence.

What does DQPro do along the insurance value chain, and who are your main types of clients in the insurance industry? What is unique about your offer compared to your competitors?

N.M: DQPro is for specialty insurers, MGAs and brokers. It continuously monitors the data that matters most to these organisations for the kind of core data quality issues that, left unchecked and uncorrected, can cause huge cost and delay down the line. In addition, DQPro monitors 24/7 for breaches of underwriting and compliance protocol.

Until DQPro, these firms had either been using simple exception-report tools built by the IT department or trying to repurpose heavyweight, overly technical tools – often with mixed results.

We’re unique in two ways: we’re the first solution to focus on the specialty market, and we move true ownership and accountability for these issues to the front office – from technical teams to regular, everyday business users, and even to the boardroom, where, quite frankly, the issue of data quality belongs.

DQPro engages everyone in improving data in a simple, intuitive way, and perhaps the best proof is that more than 1,000 users now work in DQPro daily at specialty (re)insurers in over 10 countries.

What is your opinion of the insurtech wave we have seen over the past five years? Have you seen real transformation in the way of doing business and in insurers’ culture?

N.M: Insurtech has been great for the industry as a whole. Yes, it has involved lots of hype, but also some real progress. I think we’ve yet to see the full-stack digital insurers we were promised in the early days, because newcomers have realised just how hard it is to operate in a regulated, capital-intensive business.

In insurance, reputation is built over decades, not months. So the term itself has now become much broader, and we’ve moved from the notion of outsider disruption to a more partnership-focused approach.

Now insurtech really means anything that helps carriers improve through new technology. The great outcome is that insurers are more open to new ideas than ever and, in particular, can appreciate the operational and ultimately bottom-line benefits that technology partnerships offer. Some dedicated innovation teams are now being wound down because innovation has become embedded in the way of doing business – it’s a given that if you don’t innovate, you quickly start to lag the market.

Data has always been crucial in the insurance business, and even more so now with the new insurtech wave. There are lots of opportunities in handling and using big data, but there are also many risks involved. Could you tell us a bit about what the most exciting opportunities are, in your view, for the future of data in insurance?

N.M: I’ll call out two areas:

Data augmentation is a huge opportunity. We’re producing more data than ever before, and the idea that we can harness it to improve how we assess and price risk, or to become more efficient, is key to the notion of a digital future where everything is data-driven.

But to use big data effectively you have to get your “small data” right first – I cannot overemphasise what a fundamental and critical issue this is. That means having a base confidence in your core operational data before you attach anything else to it.
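
As a rough illustration of this “small data first” idea – a minimal sketch, not DQPro’s actual implementation, with hypothetical field names – a record might only be augmented with external data once its core fields pass basic validity checks:

```python
# Minimal sketch: refuse to attach external (big) data to a record until
# its core (small) data passes basic checks. Field names are hypothetical.
from datetime import date

REQUIRED_FIELDS = ("policy_ref", "insured_country", "inception_date", "expiry_date")

def base_confidence(record: dict) -> list:
    """Return a list of core data issues; an empty list means safe to augment."""
    issues = ["missing " + f for f in REQUIRED_FIELDS if not record.get(f)]
    if not issues and record["inception_date"] > record["expiry_date"]:
        issues.append("inception date after expiry date")
    return issues

def augment(record: dict, external: dict) -> dict:
    """Attach external data only once the core data is trusted."""
    issues = base_confidence(record)
    if issues:
        raise ValueError("core data not trusted: %s" % issues)
    return {**record, **external}

policy = {"policy_ref": "P-001", "insured_country": "GB",
          "inception_date": date(2024, 1, 1), "expiry_date": date(2024, 12, 31)}
print(augment(policy, {"flood_score": 0.12}))  # passes checks, so augmentation proceeds
```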

Secondly, APIs and data standards. We’re moving to a world where there are more industry platforms, but also tens or possibly hundreds of specialist providers offering point solutions to very specific problems in the business processes that these platforms support. So APIs offer huge potential for specialists to plug in and offer their services, whether that’s providing a “trust score” for some quote data or verifying that an overseas tax split is correct. Any solution has to offer multiple ways to integrate.
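
To picture the plug-in pattern described here, a minimal sketch of a platform posting quote data to a specialist’s trust-score service – the URL, endpoint path and response shape are entirely hypothetical, invented for this example:

```python
# Hypothetical "trust score" API call. A real integration would also need
# authentication and error handling; this only illustrates the plug-in idea.
import json
from urllib import request

def get_trust_score(quote: dict,
                    base_url: str = "https://api.specialist.example") -> float:
    req = request.Request(
        base_url + "/v1/trust-score",
        data=json.dumps(quote).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["trust_score"]  # e.g. 0.0 (none) to 1.0 (full)

# Usage: score = get_trust_score({"quote_ref": "Q-123", "gross_premium": 50000})
```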

What about the risks involved? How can bad data impact an insurer’s business? Can you give an example of a problem a client had and how you fixed it?

N.M: Bad data is a bit like a river system: when you pollute the headwaters, everything downstream is affected. Incorrect data, or just poor underwriting controls at source, can cause huge problems, time and expense further downstream.

Take a specialty insurance policy – we’ve found at least 33 different ways to mess up location, i.e. where a risk is based. That has huge implications for downstream functions like reserving, compliance, regulatory reporting and finance. Other examples we’ve encountered include rogue underwriting – where an underwriter has written a risk code outside of the agreed business plan (the Syndicate Business Forecast, or SBF) submitted to Lloyd’s.

Failing to spot that early led, in one case, to the collapse of a consortium, a regulatory fine and a significant increase in reinsurance premiums, not to mention reputational damage. At the simpler end of the spectrum, we found one carrier saved 4.5 man-days a year by implementing a single daily rule around basic policy dates. That doesn’t sound like much, but for a mature implementation of 200+ rules it adds up to a huge saving of FTE time that can be put to better use.
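
For a flavour of what such a daily rule might look like – a minimal sketch in Python with hypothetical field names, not DQPro’s actual rule engine – it scans the book and produces an exception list a business user can own and work through:

```python
# Minimal sketch of a daily policy-date rule. Field names are hypothetical.
from datetime import date

def policy_date_rule(policies: list, today: date) -> list:
    """Flag policies whose dates cannot be right."""
    exceptions = []
    for p in policies:
        if p["expiry_date"] <= p["inception_date"]:
            exceptions.append(p["ref"] + ": expiry on or before inception")
        elif p["inception_date"] > today.replace(year=today.year + 2):
            exceptions.append(p["ref"] + ": inception implausibly far in the future")
    return exceptions

book = [
    {"ref": "P-001", "inception_date": date(2024, 1, 1), "expiry_date": date(2023, 12, 31)},
    {"ref": "P-002", "inception_date": date(2024, 3, 1), "expiry_date": date(2025, 2, 28)},
]
print(policy_date_rule(book, date(2024, 6, 1)))
# ['P-001: expiry on or before inception']
```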

How do insurtech startups handle the issue of data control? What are the differences between incumbents and new companies? Is being young and agile an advantage when it comes to data quality control implementation?

N.M: Definitely. New startup carriers, brokers and MGAs typically begin with a digital, data-first approach. They are not encumbered by legacy, so they can start with a fresh mindset and architect their business in a way that works. But in the rush to get started and get money in the door, shortcuts are common, and that can create legacy issues quickly if not managed correctly.

Smart technology providers recognise this and will have a flexible approach to doing business that covers everyone from the smallest startups to the largest carriers.

Nick Mair

Co-founder at DQPro