What 2017 Has in the Works for Financial Technology

That time of the year has arrived, and the December travel season provides a moment to look back on the past year’s progress and make predictions for what may or may not lie ahead in the coming months. Since we’re living and breathing the data landscape and financial technology, we couldn’t resist sharing our views and expectations for the data analytics ecosystem in 2017 and how it will impact the financial industry.

Better Data Versus Better Models

If there is one thing we data scientists love to do, it is to compete on the quality of our models. The community rightfully takes a lot of pride in pushing the boundaries of what data modelling can predict to enable greater access to credit for new and underserved segments of consumers. But with the predictive analytics market slowly becoming commoditized, standing out based on algorithms alone, rather than the data being used, will become increasingly challenging. Our desire to produce the best, most comprehensive model may also eventually come back to haunt us. Recent surges in default rates for online consumer loans highlight the risk of overfitting statistical models, which can have serious repercussions, and have even prompted the fintech industry to face a moment of introspection. Banks will therefore need to move beyond just holding models to a high standard and pay more attention to the quality of the data they leverage.
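The overfitting risk above can be made concrete with a toy sketch (not any lender's actual model): an overly flexible model beats a simple one on the data it was trained on, then degrades badly on held-out data, just as a credit model tuned too tightly to past applicants can misjudge future ones.

```python
import numpy as np

# Toy data, purely illustrative: a nearly linear signal plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Hold out the last 8 points to stand in for "future" applicants.
x_train, y_train = x[:12], y[:12]
x_hold, y_hold = x[12:], y[12:]

def rmse(degree):
    """Fit a polynomial of the given degree on the training slice
    and return (training RMSE, holdout RMSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    pred_train = np.polyval(coefs, x_train)
    pred_hold = np.polyval(coefs, x_hold)
    return (np.sqrt(np.mean((pred_train - y_train) ** 2)),
            np.sqrt(np.mean((pred_hold - y_hold) ** 2)))

simple_train, simple_hold = rmse(1)    # a simple, robust model
complex_train, complex_hold = rmse(9)  # an overly flexible model

# The flexible model looks better in-sample but generalizes far worse:
# a large train/holdout gap is the classic overfitting signature.
```

Watching that train/holdout gap, rather than in-sample fit alone, is the cheapest guard against the failure mode the paragraph describes.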

You may think that’s easier said than done. And you would be right. The emphasis these days on incorporating more and more data into corporate decision making may feel a little overwhelming. As IBM’s CEO Ginni Rometty pointed out back in September, businesses will need to get used to making data-driven decisions to remain competitive. This applies to companies across the board, from the financial industry to retail and healthcare. But with Big Data expected to grow to $203 billion per year by 2020 from $130 billion today, and companies facing piles of increasingly complex and varied datasets, the real differentiating factor will be the ability to extract value from the right data to solve the problem at hand. In the case of financial institutions, this will mean using data to better verify potential prospects, accurately gauge levels of fraud risk, and expand credit opportunities to historically underserved populations.

Data Transparency for the Digital Generation

As much as we love predictions, 20 years ago, few people could have foreseen how huge Big Data would become in today’s society. Consumers who are now in their thirties opened their first email account during adolescence (which explains some of the email names …). They had no real conception or point of reference for how this exchange of information online would impact their digital footprint.

But those same consumers learned fast and today they want greater control over their data. Indeed, individuals are becoming more invested in how companies use their personal information — from Facebook posts, and browsing habits, to credit decisions. Firms, such as Credit Karma, are also helping their clients significantly improve their financial data literacy and take a vested interest in their information. Financial organizations, and banks in particular, will need to adapt their strategies to customers’ growing sense of pride, concern, and ownership over their personal data.

Alternative Data as the New Normal

Long gone are the days when traditional data — originating from application information or credit bureaus — was all that a lender needed to get to know their customers. To be fair, this system was originally designed to help local branch managers make lending decisions about customers they already knew well and interacted with face-to-face. Today, the digital space has made relying solely on traditional data inadequate, as it provides only a partial picture of a customer’s true level of risk. This gap has contributed to the “thin file” problem, where millions of creditworthy consumers are refused loans because they do not have a long enough payment record to pass historical credit tests.

That’s where alternative data comes in. Adding more information to the equation has proven hugely valuable, and this is only the beginning. However, the devil is in the details. As software technology grows to compliantly aggregate more and more data to better verify potential clients — from emails, and mobile calling patterns, to social media — the key will not only be to provide a wider array of available data, but also to check for consistency across all these different databases and choose which data sources are most insightful. Does the name match the address? Check. Do the phone owner and phone number match across credit bureau reports and registries? Check. Is the age of the customer the same across their various social media profiles? Check. Only then will the data bring true risk to light or give realistic levels of confidence on prospective clients, helping lenders really know and extend credit to their customers.
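The checklist above can be sketched as a simple cross-source comparison. The records, field names, and sources below are hypothetical stand-ins, not a real schema or API:

```python
# Hypothetical records for one applicant, as returned by three
# illustrative sources (application form, bureau report, social profile).
application = {"name": "Jane Roe", "phone": "+1-555-0100", "age": 34}
bureau      = {"name": "Jane Roe", "phone": "+1-555-0100", "age": 34}
social      = {"name": "Jane Roe", "phone": "+1-555-0199", "age": 29}

def consistency_checks(*records):
    """Return a {field: bool} map, True where every source agrees."""
    fields = set().union(*(r.keys() for r in records))
    return {f: len({r.get(f) for r in records}) == 1 for f in fields}

checks = consistency_checks(application, bureau, social)
# name agrees across all three sources; phone and age do not.

# A crude agreement ratio — a real pipeline would weight fields by
# how predictive each mismatch is, rather than counting them equally.
score = sum(checks.values()) / len(checks)
```

In practice each mismatched field would feed a risk signal rather than a flat ratio, but the core idea — the same attribute reconciled across independent sources — is what the checklist is describing.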

By Kevin McCarthy, Chief Customer Officer


