Big Data Ethics

Big data ethics is a growing concern because decision engine technologies (e.g. AI/ML and RPA) are consistently improving. With this increased quality comes the ability to painlessly embed more attributes within customer workflows. Platforms such as Demyst put 100,000+ new attributes at a data scientist's fingertips, along with seamless deployment. For almost every workflow being optimized, the available set includes a wide range of attributes that are both predictive and compliant.

They can be used, but should they be?

Here we offer a framework for thinking through the ethics of using big data, recognizing that it's ultimately up to each enterprise to set its own boundaries.


An Ethics Framework for Big Data: The attribute-centric approach

Any system being developed can ultimately be assessed through a variety of ethical lenses: the model type, the use case, the training data. We recommend an attribute-centric approach: the attribute is the atomic unit around which enterprises review, debate, and document what should and shouldn't be included.

This requires strong infrastructure for defining attributes: their provenance, meaning, and potential, as well as their known biases. Any poorly defined attribute should likely be excluded – all the more reason to start with 100,000+!
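To make this concrete, here is a minimal sketch (in Python, with illustrative names – not a Demyst API) of the kind of attribute record such an infrastructure might maintain, and of excluding poorly defined entries up front:

```python
from dataclasses import dataclass, field

@dataclass
class AttributeRecord:
    """Metadata an enterprise should hold before an attribute enters any model."""
    name: str
    source: str                       # provenance: vendor, internal system, etc.
    definition: str                   # plain-language meaning
    known_biases: list[str] = field(default_factory=list)

    def is_well_defined(self) -> bool:
        # Attributes missing provenance or a documented meaning get excluded.
        return bool(self.source.strip()) and bool(self.definition.strip())

# Example: the undocumented attribute is filtered out before any review.
candidates = [
    AttributeRecord("median_income_zip", "census_vendor", "Median household income by ZIP"),
    AttributeRecord("mystery_score_17", "", ""),
]
reviewable = [a for a in candidates if a.is_well_defined()]  # keeps only the first
```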

Developing reliable systems requires a triage process: cast a wide net first, then converge on the most parsimonious workflow. Ethical considerations should be rigorously applied in the latter stage, not the former.
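As a sketch of that triage (hypothetical names; a univariate correlation screen stands in for whatever first-pass signal measure a team prefers), the ethics review is applied only once the wide net has been narrowed:

```python
import statistics

def triage(attributes: dict[str, list[float]], target: list[float],
           excluded_by_ethics: set[str], top_k: int = 20) -> list[str]:
    # Stage 1: cast a wide net -- rank every attribute by raw signal,
    # with no ethical filtering yet.
    ranked = sorted(
        attributes,
        key=lambda a: abs(statistics.correlation(attributes[a], target)),
        reverse=True,
    )
    shortlist = ranked[:top_k]
    # Stage 2: converge -- rigorously apply the ethics review (here, a
    # pre-agreed exclusion set) to the parsimonious shortlist only.
    return [a for a in shortlist if a not in excluded_by_ethics]
```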

Unconstrained Big Data: Should Some Data Stay Uncollected?

Should sensitive attributes even be allowed to enter the building? We have seen many enterprises impose hard filters – sometimes for compliance reasons – to keep certain variables out of all systems. We argue that, for ethical reasons, it is acceptable and leads to better ethical outcomes to ingest attributes that might later be knocked out. A common example from insurance: in a controlled risk modeling sandbox, is it okay to evaluate a protected class attribute (attribute X)? Casting a wider net within an unconstrained study allows modelers and modeling tools to tease out interaction effects and reduce statistical model bias. Contrast two processes:

· Attribute X is excluded before ingestion. Risk models in a sandbox find correlations with location (which may innately correlate with X) as well as income. A subsequent policy review now requires a deep inspection of every attribute through the lens of whether it correlates with other sensitive attributes. This is more likely to cause the exclusion of location and result in a weak model.

· Attribute X is allowed into the sandbox. Models find correlations with X, location, and income, and their interaction effects. A subsequent policy review can assess each attribute on its own, and may determine to exclude X while retaining the predictive aspects of location and income (without retraining; see the sketch below). This is a stronger model and requires less work in total.
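A minimal sketch of the second process, on synthetic data. It assumes a linear model, where suppressing X at scoring time is as simple as zeroing its coefficient; for non-linear models the same suppression generally does require retraining or more careful surgery:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
x_protected = rng.integers(0, 2, n).astype(float)    # attribute X
location = 0.6 * x_protected + rng.normal(size=n)    # innately correlated with X
income = rng.normal(size=n)
y = (0.5 * income - 0.4 * location + rng.normal(size=n) > 0).astype(int)

# Sandbox: fit with X included, so its effect is estimated explicitly
# instead of leaking into the location and income coefficients.
features = np.column_stack([x_protected, location, income])
model = LogisticRegression().fit(features, y)

# Policy review outcome: exclude X, retain location and income -- no retraining.
deployed_coef = model.coef_.copy()
deployed_coef[0, 0] = 0.0

def score(row: np.ndarray) -> float:
    z = (deployed_coef @ row).item() + model.intercept_.item()
    return 1.0 / (1.0 + np.exp(-z))
```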

Use cases

Attributes absolutely must be assessed within the context of the use case. We believe the key determinant of an ethical cutoff (when to suppress a given attribute) is the enterprise's moral obligation, as perceived by society, to know its customers – an obligation that lies on a spectrum. Some examples of how I personally view this:

· Low: Push and direct outreach marketing. This is an area where an enterprise isn't obliged to know me. I don't mind seeing a billboard on the highway that is not customized to me, and I expect the same of digital advertising. So, when enterprises use data for targeted marketing, I expect internal ethical reviews to err on the side of caution and exclude any variable that could be even remotely sensitive – where I go, who I know, what I buy – none of these, in my view, are data a company is obliged to know.

· Medium: Convenience workflows and product features. In today's world I expect, with opt-in, a painless user experience and an accurate product. I expect and want organizations to know me deeply, insofar as that knowledge is used to make my life easier. So, ethically, I expect organizations to err on the side of accessing and using any data they can find that helps them serve me better.

· High: Diligence and risk. As a member of society, I feel enterprises have a strong obligation to find everything that is reasonable to find in order to mitigate negative behavior. A fair lending example: from an ethical perspective, is a bank obliged to check a person's LinkedIn as part of an application process, to see if they recently lost their job, because the bureau data may be stale? I think so, in which case I prefer enterprises to err on the side of retaining and using that information (whereas the same attributes would be completely unacceptable in a marketing use case).

· Highest: Bad actors. Even more than in risk assessment, as a member of society I feel many enterprises have an obligation to find nefarious behavior. Take money laundering and synthetic fraud detection in banking: even if these weren't regulated obligations, I would personally feel it is a bank's responsibility, as the best placed party, to collect data to find and address them. To take it further, are enterprises obligated to collect data even when doing so contradicts other norms or regulations, e.g. by scraping, sharing data, moving data across borders, or using data without appropriate consent? If the use case is finding a terrorist, are the "ethical gloves" off?

Centralized filtering with decentralized debate

Coupled with initiatives around model risk management (MRM), most enterprises are building a single system of record for all models and heuristics. An attribute-centric approach allows for a clear checkpoint before models are deployed into such a system.

Enforcing this requires clarity on the attribute and the use case, a sign-off process, and an associated workflow tool. The hard part is: who decides?
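One way such a checkpoint might look – a hypothetical gate, not any real workflow tool's API – is a hard failure whenever a model carries an attribute without a documented sign-off for its specific use case:

```python
# Sign-off ledger: (attribute, use case) -> decision. Illustrative entries only.
signed_off = {
    ("median_income_zip", "credit_risk"): "approved",
    ("has_children", "aml"): "approved",
    ("has_children", "marketing"): "rejected",
}

def deployment_gate(model_attributes: list[str], use_case: str) -> None:
    """Block deployment into the system of record unless every attribute
    has an explicit approval for this use case."""
    blocked = [a for a in model_attributes
               if signed_off.get((a, use_case)) != "approved"]
    if blocked:
        raise PermissionError(
            f"Attributes lacking sign-off for {use_case!r}: {blocked}")

deployment_gate(["median_income_zip"], "credit_risk")   # passes silently
# deployment_gate(["has_children"], "marketing")        # would raise
```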

We advocate an open discussion within a relevant team for the given use case, with executive oversight by the CDO.

There is typically no regulated right and wrong. And beyond extremely obvious edge case attributes, there is likely no enterprise-wide definition of right or wrong either. Ethics, like politics, is defined in the margins. People know what they like and don't like when they see it. Experienced people can be trusted to rely on their instincts.

Unlike compliance, which is best addressed through a centralized team, enterprise data ethics is best addressed by those who live in the use case and data.

Is it OK to use whether people have children within an AML model? Defer to the process leader with deep experience of manual AML reviews and of engaging external stakeholders to justify what is fair – that team needs to be involved in deciding which attributes matter.

This is analogous to the Swiss canton system of voting: decisions are made locally, by those closest to the context.

And recognizing that ethical views evolve and enterprise teams change, this debate needs to be regular and ongoing, not limited to model deployment.

External and Internal Big Data Ethics

Whether a given customer attribute should be used is an ethical question independent of whether the attribute is internal or external, and the framework above applies equally to both.

That said, there are a range of critical practical matters when it comes to external data:

· Provenance: Ethical debates require clearly documented and audited source information

· Re-assessment: External data requires more frequent re-modeling, re-assessment, and re-debate among key stakeholders than internal data, because the rate of innovation and change is typically faster (a minimal sketch of a cadence check follows this list)

· Contracting: Ethical responsibility cannot be passed to a counterparty. There is no indemnification clause that helps a chief data officer look in the mirror and feel good about what data is being used. What should be contracted are reps and warranties about definitions, and notification of updates that would trigger re-assessment
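Here is the re-assessment sketch referenced above (intervals and field names are assumptions, not a prescription): external attributes get a shorter review cycle, and any contracted vendor change notice forces an immediate re-review:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = {
    "external": timedelta(days=90),    # faster-changing sources, shorter cycle
    "internal": timedelta(days=365),
}

def needs_reassessment(origin: str, last_reviewed: date,
                       vendor_update_notice: bool) -> bool:
    # A contracted update notice always triggers re-review, regardless of age.
    if vendor_update_notice:
        return True
    return date.today() - last_reviewed > REVIEW_INTERVAL[origin]

needs_reassessment("external", date(2024, 1, 1), vendor_update_notice=False)
```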

This is not a new debate, and it is no different in principle from areas such as the subjective assessment of discriminatory hiring practices. The difference is that with the proliferation of AI systems and data, the scale and complexity of the debate now require new tools and new leadership, and demand delving into the nuance of interactions between variables.

Mark Hookey
CEO and Founder
