While personalisation is the new-age business mantra, data plays an essential role in helping organisations deliver superior customer experience and value. Though data is widely available and abundant, its quality matters more than its quantity.
One recent survey found that 84 per cent of CEOs globally are concerned about the quality of the data on which they base their decisions. Poor data also carries a financial impact: it makes it more difficult to understand exactly who your customers are, where they are, and what they like and dislike.
While digital leaders in online retail and travel accommodation use predictive analytics effectively, poor data quality practices can undermine digital initiatives, weaken competitive standing and sow customer distrust.
For a customer-centric sector like insurance, the quality of data assumes even greater importance: in operational efficiency, in creating a single customer view, in bringing down the incidence of fraud and in building trust.
Data challenges for Indian insurers stem from the old methods of underwriting: an applicant fills in a form, and coverage is provided at standard age-based pricing (using mortality tables, supported by the medical and financial history declared in the form).
The need of the hour, however, is granular analysis of the data collected, to establish its quality, and a central data repository to move to the next level. As of now, the existing data sits in different formats and systems, with errors and scope for duplication. All of this reduces data quality, leading to flawed communication and misleading information, and harming the interests of both parties, the insured and the insurer.
Findings from the latest EY Global Insurance Trends report show that new data-driven pricing models such as Pay-as-you-Live and Pay-as-you-Drive represent some of the biggest challenges to growth and profitability, while integrated data ecosystems and platforms have the potential to create new risk pools and monetisation opportunities.
Increasing digital interaction with consumers has resulted in the generation and capture of a large amount of data, including new types of data. These vast and complex reserves of data, if carefully analysed, can be used to extract business intelligence that can help insurers in customer acquisition, customer retention and risk management.
Analysis of this vast data set is possible through data analytics, which can handle not only large and complex data sets but also a variety of data from many sources. This analysis would help insurers to:
Reduce the incidence of fraud and bring down wastage: Fraud is a bane for insurers; the sector loses huge sums of money, as well as customer trust, to fraud and delays in claims handling. Currently, a lot of time, effort and resources go into detecting fraud. Quality data and analytics, however, would arm teams to identify fraud with greater accuracy and efficiency.
Analyse risks better: Meaningful analysis of data would help insurers strengthen their underwriting process, enabling better risk selection based on proven risk attributes (across the broad spectrum of health data, not just an individual insurer's data) and reducing the need to collect information in intrusive ways.
Improve turnaround time (TAT): With accurate data available, the underwriting process would be faster and closer to frictionless, with fewer human touchpoints. This in turn would improve turnaround time, enhancing customer satisfaction and engagement.
All of this would help lower overhead costs for insurers, by using data to stream new customers or claims into the groups that most need a human touchpoint from the underwriter or claims handler. This, in turn, makes products more affordable for end-users.
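The streaming idea can be pictured as a simple routing rule. This is a minimal sketch, not the author's method: the anomaly score (assumed to come from an upstream analytics model), the field names and the thresholds are all illustrative assumptions.

```python
# Toy triage rule for streaming claims to the right queue.
# "anomaly_score", "amount" and the thresholds are hypothetical.
def route_claim(claim):
    """Send unusual or high-value claims to a human handler;
    let routine claims pass straight through."""
    if claim["anomaly_score"] > 0.8 or claim["amount"] > 500_000:
        return "manual_review"
    return "straight_through"

# Routine claims skip the human touchpoint entirely.
print(route_claim({"anomaly_score": 0.1, "amount": 1_000}))
```

In practice the score would come from a fraud or risk model, but the routing logic stays this simple: only the cases that genuinely need human judgement consume underwriter or handler time.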
While big data analytics can help generate meaningful insights from vast, complex data sets, consistent effort is needed to maintain the quality of this data and prevent it from decaying. Thus, for insurers, it is pertinent to:
Cleanse the data regularly: Data cleansing primarily refers to standardising or normalising data, dropping duplicates and merging multiple datasets to make them ready for processing. A formal update on a regular basis would help insurers keep their database relevant and current, thereby preventing its decay.
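The three cleansing steps named here (standardising values, dropping duplicates, merging datasets) can be sketched in a few lines of pandas. The records, system names and columns below are hypothetical, purely for illustration:

```python
import pandas as pd

# Hypothetical policyholder records from two internal systems.
crm = pd.DataFrame({
    "name": ["Asha Rao", "asha rao ", "Vikram Mehta"],   # note the near-duplicate
    "pin":  ["560001", "560001", "400001"],
})
claims = pd.DataFrame({
    "name": ["ASHA RAO", "Vikram Mehta"],
    "claim_amount": [12000, 45000],
})

def normalise(df):
    # Standardise: trim whitespace and unify case so variants match.
    out = df.copy()
    out["name"] = out["name"].str.strip().str.title()
    return out

crm_clean = normalise(crm).drop_duplicates()              # drop exact duplicates
merged = crm_clean.merge(normalise(claims), on="name", how="left")
```

Without the normalisation step, "Asha Rao" and "asha rao " would survive as two different customers and the merge would silently miss the claim record, which is exactly the kind of quality decay the article describes.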
Enhance the data: It is equally essential to enhance the quality of existing data to create a more robust customer profile. At the same time, any incomplete or missing data (usually introduced by human error) should be filled in at the earliest opportunity. Moreover, obtaining data from a trusted and credible information provider weeds out the chances of error, duplication and manipulation.
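Filling gaps from a trusted source can be sketched the same way. This is an assumption-laden toy example: the provider feed, the shared policy identifier and the phone values are all invented for illustration.

```python
import pandas as pd

# Hypothetical customer records with a gap introduced by manual entry.
customers = pd.DataFrame({
    "policy_id": ["P001", "P002", "P003"],
    "phone": ["98450-11111", None, "98450-33333"],
})
# Reference data from a (hypothetical) trusted provider, keyed on policy_id.
provider = pd.DataFrame({
    "policy_id": ["P002"],
    "phone": ["98450-22222"],
})

# Flag incomplete records first, then fill the gaps from the trusted source;
# Series.fillna aligns the two feeds on the shared policy_id index.
incomplete = customers[customers["phone"].isna()]
filled = customers.set_index("policy_id")["phone"].fillna(
    provider.set_index("policy_id")["phone"]
).reset_index()
```

Keeping the "incomplete" flag separate matters: it tells the insurer which records were repaired from the provider rather than captured correctly at source.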
Manage the data well: Besides sanitising the data and improving its quality, managing the data for the right purpose is an important element of the equation. Instead of doing these tasks themselves, insurers can consider using the services of a data aggregation company that specialises in creating and managing a system that detects changes and automatically updates individual records, including name, address and contact numbers. Such a system would also proactively flag any suppression of records, which can otherwise lead to potential fraud.
Link the data: Finally, it is crucial to link data across the insurance ecosystem and with other health or medical data sources and public records, for a more comprehensive profiling of customers, the so-called single customer view. By linking data from multiple sources, insurers can reduce the human-subjectivity factor in assessing risks, understand their customers better, and predict fraud.
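At its simplest, a single customer view is a chain of joins on a shared identifier. The extracts and fields below are hypothetical stand-ins for policy, medical and public-record sources:

```python
import pandas as pd

# Hypothetical extracts from three sources in the ecosystem,
# keyed on an assumed common customer identifier.
policy  = pd.DataFrame({"cust_id": [1, 2], "policy": ["life", "motor"]})
medical = pd.DataFrame({"cust_id": [1], "bmi": [24.5]})
public  = pd.DataFrame({"cust_id": [2], "court_flag": [False]})

# Chain left joins on the shared identifier to build one row per customer.
view = (policy
        .merge(medical, on="cust_id", how="left")
        .merge(public, on="cust_id", how="left"))
```

Real linkage is harder than this sketch suggests: sources rarely share a clean common key, so the normalisation and deduplication steps discussed earlier are what make the joins possible at all.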
The author is the Managing Director, India, LexisNexis Risk Solutions.