
Assessing the risks of fair lending target marketing 

Tim Latshaw

Summary

The nature of modern targeted marketing could lead to unintended fair lending violations despite a financial services organization’s best intentions. This blog considers digital redlining and how to exercise due diligence when using any form of marketing. We also consider the potential and pitfalls of using artificial intelligence (AI) in addressing fair lending risks.

The specifics of fair lending rules continue to evolve, but their core anti-discriminatory tenets should remain key to any financial services organization’s policy. Banks, credit unions, and lenders should not deny their offerings based on factors such as race, color, sex, religion, national origin, disability, or marital status. It would be difficult to find a financial services official unaware of these principles.

However, fair lending risks must be considered in marketing as well. Some might wonder whether discrimination remains a serious concern in a digital age, where more decisions have been taken out of human hands and placed into seemingly objective algorithms.

The short answer is yes.

In fair lending, omission can lead to digital redlining

Fair lending risk can arise in what your marketing says, how it is presented, and who does or does not see your marketing.

An advertisement for a financial service could contain fully inclusive language and imagery. However, if that advertisement is distributed in a way that ensures most or all of the people you wish to include never see it, that could constitute a fair lending violation.

This form of discrimination is known as digital redlining, and digital target marketing controls could unintentionally cause an organization to discriminate in this manner.

A potential benefit of digital target marketing is its ability to focus advertising where it might be most effective and improve return on marketing investment. This method can be perfectly suitable for many products but could generate concerns when fair lending risks must be considered.

Depending on the data sourced and the algorithms used, a digital targeting method could unintentionally discriminate against certain groups based on their physical locations or specific habits, preferences, or financial patterns exhibited online. In other words, a marketing algorithm could focus intently on a specific group or area of people to the detriment of eligible individuals outside its criteria.

Federal regulators and major social media companies have taken action regarding digital redlining. In March 2019, Facebook (now Meta) eliminated the ability for ad purchasers to exclude people of specific races, ages, and genders from seeing their ads. In June 2022, Meta announced further overhauls to potentially discriminatory features in its advertising algorithms specific to the distribution of housing ads on Facebook.

Other regulatory bodies and digital platforms are also starting to impose rules about AI use. In the legal field, some state bar associations are issuing guidelines about confidentiality and marketing. YouTube also requires disclosures of synthetic or AI-supplemented content.

While these steps are encouraging, they should not be taken as a guarantee that digital redlining has been entirely eliminated from any process. Yet, despite these challenges, digital marketing isn’t something you should eliminate from your marketing strategy. Doing so will put your organization at a significant disadvantage.

What should financial services organizations do? Exercise due diligence and caution when using any form of digital target marketing. Risk-reducing measures include:

  • Learning as much as possible about any target marketing programs they are using.
  • Asking for information regarding target marketing programs that agencies and other companies might use on their behalf.
  • Evaluating any filters used within target marketing.
  • Requesting and analyzing reports on audiences reached through target marketing.
  • Reviewing and addressing any complaints received regarding digital marketing.
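To make the last two measures concrete, here is a minimal, hypothetical sketch of an audience-reach audit. It compares the share of each eligible group your campaign actually reached and flags any group reached at less than 80% of the best-reached group's rate (the "four-fifths" screen commonly used in disparate-impact analysis). The group names and figures are invented for illustration.

```python
# Hypothetical sketch: flag demographic groups whose ad-reach rate falls
# below 80% of the best-reached group's rate (the "four-fifths" screen
# often used as a first pass in disparate-impact analysis).

def reach_rates(reached, eligible):
    """Share of each eligible group that actually saw the campaign."""
    return {g: reached[g] / eligible[g] for g in eligible}

def flag_underserved(reached, eligible, threshold=0.8):
    """Return groups reached at less than `threshold` of the top rate."""
    rates = reach_rates(reached, eligible)
    best = max(rates.values())
    return sorted(g for g, rate in rates.items() if rate < threshold * best)

# Invented numbers for illustration only.
eligible = {"group_a": 10_000, "group_b": 8_000, "group_c": 12_000}
reached = {"group_a": 4_000, "group_b": 3_100, "group_c": 2_400}

print(flag_underserved(reached, eligible))  # ['group_c']
```

A real audit would pull these counts from your ad platform's delivery reports and compare them against market demographics, but even this simple ratio check can surface a group your campaign is quietly missing.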

Will artificial intelligence (AI) and machine learning reach a point where you can comfortably place your marketing in the hands of target marketing algorithms? Perhaps. But not right now. These tools are still largely influenced by the element people using them might be hoping to eliminate: human bias.

AI is not perfect in addressing fair lending risks in target marketing

The potential of AI and machine learning for crunching information, predicting outcomes, and making decisions is wide-reaching. Such tools can surface connections within massive datasets that humans might overlook – and far more quickly than any person could.

Yet the superhuman capabilities of AI and machine learning do not mean these tools exist in a realm outside of human influence. And where fair lending and targeted marketing are concerned, that influence carries significant risk.

AI and machine learning models – such as ChatGPT, Claude, and others that may influence target marketing or copywriting programs – did not emerge as fully formed marketing or copywriting gurus. They must be trained on previously existing data before they can draw conclusions and make predictions.

Ultimately, an AI or machine learning model can only be as “objective” as the data it was trained on. At least some, if not all, of that data was collected by humans and, in turn, can be intentionally or unintentionally tainted by personal or historical biases.

For example, an AI or machine learning model trained on real estate data in a certain location over the past 100 years could end up with a data set where most homeowners are white. Although the intention of the training would not be to target white people in advertising within this area, the historically overwhelming weight of the data might lead to discriminatory targeting. In such ways, historical redlining could result in the perpetuation of digital redlining.
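The mechanism can be shown with a deliberately simple, hypothetical sketch: a naive "lookalike" targeter that scores prospects by how often their neighborhood appears among past customers. The neighborhoods and the 90/10 split are invented; the point is only that a model fit to skewed history reproduces the skew.

```python
from collections import Counter

# Hypothetical sketch: if the historical customer record is skewed
# (e.g., by past redlining), a naive similarity-based targeter will
# reproduce that skew in its scores.

# Invented, deliberately skewed history: 90 past customers from one
# neighborhood, 10 from another.
historical_customers = ["north_side"] * 90 + ["south_side"] * 10

def targeting_score(neighborhood, history):
    """Score = share of past customers from the prospect's neighborhood."""
    counts = Counter(history)
    return counts[neighborhood] / len(history)

print(targeting_score("north_side", historical_customers))  # 0.9
print(targeting_score("south_side", historical_customers))  # 0.1
```

Two equally eligible prospects receive a nine-to-one difference in targeting priority purely because of who happened to be a customer in the past – no one ever told the model to discriminate.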

We are not saying that AI or machine learning models should never be a part of advertising or target marketing processes. At LaFleur, we use AI like an assistant to help us compile and organize data. However, handing the reins over to AI is not something we recommend without substantial controls and consistent human supervision.

Develop an inclusive, fair lending targeted marketing strategy

When it comes to avoiding fair lending violations in financial services, pinpoint target marketing might not be as effective as demonstrating general availability to your communities.

It’s not just about regulatory compliance, either – although that certainly remains crucial. The more tightly a campaign is restricted, the more assumptions are made about who will benefit from certain services. And when those choices are primarily in the hands of AI, we’re not in control of those assumptions. Broader target marketing that still functions within good reason can open the field of potential customers and put more faith in those customers knowing what they want – and that they can get it from you.

LaFleur helps organizations develop marketing strategies that follow fair lending guidelines. We take a human-led approach that leverages technology while monitoring for risks and further opportunities. Schedule a consultation, and let’s discuss how we can build you a responsible and effective marketing strategy.

Tim Latshaw is a content specialist for LaFleur, focusing on B2B marketing strategies.