External Data Ethics

Decision engine technologies, including AI/ML and RPA, keep getting better, and with them has come the ability to painlessly embed more attributes within customer workflows. Platforms such as Demyst put 100,000+ new attributes at a data scientist's fingertips, along with seamless deployment. For almost every workflow being optimized, there is a wide range of predictive and compliant attributes among the available set.

But just because they can be used, should they?

Here we offer a framework for how to think through data ethics, recognizing that it's ultimately up to the enterprise to set its own boundaries.

Framework: An attribute-centric approach

Any system being developed can ultimately be assessed through a variety of ethical lenses — e.g. the model type, the use case, the training data. We recommend an attribute-centric approach: the attribute is the atomic unit around which enterprises review, debate, and document what should and shouldn't be included.

This requires strong infrastructure for defining attributes — their provenance, meaning, and potential, as well as their known biases. Any poorly defined attribute should likely be excluded — all the more reason to start with 100,000+!
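To make this concrete, here is a minimal sketch, in Python, of what attribute-level metadata might look like. The AttributeRecord structure and its field names are hypothetical illustrations, not a reference to any particular platform's schema.

    from dataclasses import dataclass, field

    @dataclass
    class AttributeRecord:
        """Hypothetical metadata record for a single attribute."""
        name: str             # e.g. "median_household_income_zip"
        provenance: str       # documented, auditable source lineage
        definition: str       # precise business meaning
        known_biases: list = field(default_factory=list)  # documented correlations with sensitive traits

        def is_well_defined(self) -> bool:
            # Attributes missing provenance or a clear definition
            # can be triaged out before any ethical review.
            return bool(self.provenance and self.definition)

    # A poorly defined attribute is excluded up front.
    attr = AttributeRecord(name="device_entropy_score", provenance="", definition="")
    assert not attr.is_well_defined()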

Developing reliable systems requires a triage process — casting a wide net first, then converging on the most parsimonious workflow later. Ethical considerations should be rigorously applied in the latter stage, not the former.

Unconstrained science

Should sensitive attributes even be allowed to enter the building? We have seen many enterprises impose hard filters, sometimes for compliance reasons, to keep certain variables out of all systems. We argue that ingesting attributes that might later be knocked out is acceptable and leads to better ethical outcomes. A common example from insurance: is it OK, in a controlled risk-modeling sandbox, to evaluate a protected-class attribute (attribute X)? Casting a wider net within an unconstrained study allows modelers and modeling tools to tease out interaction effects and reduce statistical model bias. Contrast two processes (a sketch follows the list):

  • Attribute X is excluded before ingestion. Sandbox risk models find correlations with location (which innately correlates with X) and with income. A subsequent policy review now requires a deep inspection of every attribute through the lens of whether it correlates with other sensitive attributes. This is more likely to cause the exclusion of location, and to result in a weaker model.
  • Attribute X is allowed into the sandbox. Models find correlations with X, location, and income, plus their interaction effects. A subsequent policy review can assess each attribute on its own, and may determine to exclude X while retaining the predictive aspects of location and income (without retraining). This is a stronger model, and requires less work in total.
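Here is a minimal sketch of the second process, assuming scikit-learn and synthetic data. Zeroing the fitted coefficient of X at review time is one simple, illustrative way to suppress it while retaining the fitted effects of location and income without retraining; real programs would apply more careful techniques.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.integers(0, 2, n)                         # protected-class attribute X
    location = 0.6 * x + rng.normal(0, 1, n)          # innately correlates with X
    income = rng.normal(0, 1, n)
    risk = (0.8 * income + 0.5 * location + rng.normal(0, 1, n) > 0).astype(int)

    # Sandbox: include X so its effect is estimated explicitly,
    # instead of leaking through location as a proxy.
    features = np.column_stack([x, location, income])
    model = LogisticRegression().fit(features, risk)

    # Policy review: suppress X without retraining by zeroing its
    # coefficient; location and income keep their fitted effects.
    model.coef_[0][0] = 0.0
    scores = model.predict_proba(features)[:, 1]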

Use cases

Attributes absolutely must be assessed within the context of the use case. What's more, we feel the key determinant of an ethical cutoff (when to suppress any given attribute) is the moral obligation of the enterprise to know its customers, as perceived by society, and that obligation lies on a spectrum. Some examples of how I personally view this:

  • Low: Push and direct outreach marketing. This is an area where an enterprise isn't obliged to know me. I don't mind seeing a billboard on the highway that is not customized to me, and I expect the same of digital advertising. So, when enterprises use data for targeted marketing, I expect internal ethical reviews to err on the side of caution and exclude any variable that could be even remotely sensitive — where I go, who I know, what I buy — none of which, in my view, is data a company is obliged to know.
  • Medium: Convenience workflows and product features. In today's world I expect, with opt-in, a painless user experience and an accurate product. I expect and want organizations to know me deeply, insofar as that knowledge is linked to making my life easier. So, ethically, I expect organizations to err on the side of accessing and using data they can find that helps them serve me better.
  • High: Diligence and risk. As a member of society I feel enterprises have a strong obligation to find everything that is reasonable to find in order to mitigate negative behavior. A fair-lending example: from an ethical perspective, is a bank obliged to check a person's LinkedIn as part of an application process, to see whether they recently lost their job, because the bureau data may be stale? I think so, in which case I prefer enterprises to err on the side of retaining and using that information (whereas the same attributes would be completely unacceptable in a marketing use case).
  • Highest: Bad actors. Even more than in risk assessment, as a member of society I feel many enterprises have an obligation to find nefarious behaviour. Take money laundering and synthetic fraud detection in banking — even if these weren't regulated obligations, I would personally feel it is a bank's responsibility, as the best-placed party, to collect data to find and address them. To take it further, are they obligated to collect data even when doing so contradicts other norms or regulations, e.g. by scraping, sharing data, moving data across borders, or using data without appropriate consent? If the use case is finding a terrorist, are the "ethical gloves" off?

Centralized filtering with decentralized debate

Coupled with initiatives around model risk management (MRM), most enterprises are building a single system of record for all models and heuristics. An attribute-centric approach allows for a clear checkpoint before models are deployed into such a system.

Doing so requires clarity on the attribute, the use case, a sign-off process, and an associated workflow tool to enforce it. The hard part is: who decides?

We advocate an open discussion within a relevant team for the given use case, with executive oversight by the CDO.
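As an illustration, here is a minimal sketch of such a checkpoint in Python; the registry layout, attribute names, and use-case names are hypothetical.

    # Hypothetical sign-off registry: each use-case team records its
    # decision, and deployment is blocked without an explicit approval.
    APPROVALS = {
        ("has_children", "aml_monitoring"): "approved",   # AML team sign-off
        ("has_children", "marketing"): "rejected",        # marketing team sign-off
    }

    def check_deployment(attributes, use_case):
        # Run before a model enters the system of record.
        for attr in attributes:
            if APPROVALS.get((attr, use_case)) != "approved":
                raise PermissionError(
                    f"{attr!r} not signed off for {use_case!r}; "
                    "escalate to the use-case team under CDO oversight."
                )

    check_deployment(["has_children"], "aml_monitoring")  # passes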

There is typically no regulated right and wrong. And there is likely no enterprise-wide definition of right or wrong beyond extremely obvious edge-case attributes. Ethics, like politics, is defined in the margins. People know what they like and don't like when they see it. And experienced people can be trusted to rely on their instincts.

So, unlike compliance, which is best addressed through a centralized team, ethics is best addressed by those who live in the use case and the data.

Is it OK to use whether people have children within an AML model? Defer to the process leader who has deep experience with manual AML reviews and with engaging external stakeholders to justify what is fair — that team needs to be involved in deciding which attributes matter.

This is analogous to the Swiss canton system of voting: decisions are made locally, by those closest to the issue.

And recognizing that ethical views evolve and enterprise teams change, this debate needs to be regular and ongoing, not confined to the moment of model deployment.

External data

Whether a given customer attribute should be used is an ethical question that is independent of whether the attribute is internal or external, and the framework above applies equally to both.

That said, there is a range of critical practical matters when it comes to external data:

  • Provenance: Ethical debates require clearly documented and audited source information.
  • Re-assessment: External data requires more frequent re-modeling, re-assessment, and re-debate among key stakeholders than internal data, because the rate of innovation and change is typically faster.
  • Contracting: Ethical responsibility cannot be passed to a counterparty. There is no indemnification clause that helps a chief data officer look at themselves in the mirror and feel good about what data is being used. What needs to be contracted are reps and warranties about definitions, and notification of updates that would trigger re-assessment.

This is not a new debate, and it is no different in principle from areas such as the subjective assessment of discriminatory hiring practices. The difference is that, with the proliferation of AI systems and data, the scale and complexity of the debate now require new tools and leadership, and demand delving into the nuance of interactions between variables.

Mark Hookey
CEO and Founder