DNF Editorial Staff

Industries such as Banking, Financial Services and Insurance typically capture hundreds of fields of information from customers and prospects in order to process applications and claims, open accounts, and pursue marketing opportunities. However, much of this data is of low quality or unreliable: customers and prospects enter junk data in various fields simply to get through the process faster.

Further, data is captured in multiple formats and from multiple sources: claim applications that originate from legacy systems, IT systems used by other departments, homegrown SIU (special investigative unit) case management systems, third-party vendor systems that feed medical bills, and so on.

In addition to the incompatibility between these disparate systems, useful or insightful information is often buried in long strings of data, which makes it difficult to extract the relevant text.

Thankfully, research in this area has produced several techniques for detecting fraudulent data:

  • Better integration between data sources. For effective Fraud Analytics, data from multiple sources must be integrated into one cohesive system that makes it easy to extract and compare data. Insurance companies typically handle claims, policy information, bills and invoices, medical reports and clinical information from multiple data points. The first step, then, is to create an interface that integrates all the data and makes the sources compatible with each other. Only then can your Fraud Analysts detect erroneous and redundant information.
  • Create mechanisms to gather missing or erroneous data. It is usually easy to identify the fields where customers or prospects tend to fudge the data. Data quality tools are adept at identifying, repairing, and replacing missing or erroneous values; the correct values may be available in another system, or can be derived from the existing data. Your Fraud Analytics team must build such mechanisms, along with standardizing data formats across the multiple sources of information.
  • Unify entity information. Once all the data is integrated and missing or erroneous information has been rectified, the next step is to compile all entity information in one place. An entity is the individual or company that may appear across different claims, applications and other documentation. Once an entity is identified as the same individual or company, all information about it has to be aggregated into a single place. This makes it easy to detect fraudulent information, or any suspicious activity on the part of the entity.
  • Better ways of handling unstructured text. Much of the data captured by insurance companies is free text, and there is little consistency in it: innumerable abbreviations, acronyms and pieces of jargon used by different users, not to mention typos and factual errors. The organization must harness techniques such as machine learning, natural-language processing and a thesaurus of industry keywords, which make Fraud Analytics easier and more effective.
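The repair step described above can be sketched in a few lines. This is a minimal illustration, not a production data quality tool: the record layout, field names ("zip", "phone") and the fallback "policy_db" lookup are all hypothetical, standing in for whatever second system holds the correct values.

```python
# Hypothetical application records; None marks fields that customers
# skipped or fudged. All field names here are illustrative.
applications = [
    {"id": 1, "zip": "10001", "phone": None},
    {"id": 2, "zip": None,    "phone": "555-0100"},
]

# A second, more trustworthy system (e.g. a policy database) that may
# hold the values missing from the application form.
policy_db = {1: {"phone": "555-0199"}, 2: {"zip": "10001"}}

def repair(record, fallback):
    """Fill a record's missing fields from another system when available."""
    extra = fallback.get(record["id"], {})
    return {k: (v if v is not None else extra.get(k))
            for k, v in record.items()}

repaired = [repair(r, policy_db) for r in applications]
print(repaired[0]["phone"])  # → 555-0199
```

In practice the fallback source would itself need validation, and conflicting values across systems would be flagged for an analyst rather than silently overwritten.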
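The entity-unification step amounts to grouping every record that shares an identifier under one profile. A minimal sketch, assuming records have already been integrated into one format and that a reliable shared key exists (here a hypothetical "ssn" field; real matching usually needs fuzzier logic):

```python
from collections import defaultdict

# Hypothetical claim records arriving from different source systems.
records = [
    {"ssn": "123-45-6789", "name": "J. Smith",   "claim_id": "A-101"},
    {"ssn": "123-45-6789", "name": "John Smith", "claim_id": "B-202"},
    {"ssn": "987-65-4321", "name": "Acme Corp",  "claim_id": "C-303"},
]

def unify_entities(records, key="ssn"):
    """Group records by a shared identifier so that all activity for one
    entity (person or company) sits in a single place."""
    entities = defaultdict(list)
    for rec in records:
        entities[rec[key]].append(rec)
    return dict(entities)

profiles = unify_entities(records)
# The first entity now carries two claims, so cross-claim comparison
# (and spotting suspicious patterns) becomes straightforward.
print(len(profiles["123-45-6789"]))  # → 2
```

Real entity resolution must also handle records without a clean shared key, typically by fuzzy-matching names, addresses and dates of birth.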
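The unstructured-text point can be illustrated with the simplest of the techniques named above, a keyword thesaurus: normalize free-text claim notes and expand known abbreviations so that notes from different adjusters become comparable. The abbreviation table here is a tiny illustrative sample, not an industry standard.

```python
import re

# Illustrative industry thesaurus; a real deployment would use a much
# larger curated mapping, backed by NLP and machine-learning tooling.
ABBREVIATIONS = {
    "mva": "motor vehicle accident",
    "dx": "diagnosis",
    "pt": "patient",
}

def normalize_note(text):
    """Lowercase, strip punctuation noise, and expand known abbreviations
    so free-text notes can be compared across users and systems."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

print(normalize_note("Pt involved in MVA; dx: whiplash"))
# → patient involved in motor vehicle accident diagnosis whiplash
```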

Managing data better is critical to detecting fraud. Organizations must invest in tools, software, systems and skilled human capital to tackle the problem. Contact DNF today to begin the conversation about upping your Data Security, for your firm and your customers.