The Role of Mortgage Regulators in Generative AI

January 28, 2025

By Tela G. Mathias

In speaking and engaging with industry about the uses of generative AI (genAI) in mortgage, I sometimes get questions from regulators and housing agencies about what their role is. As a housing agency, what is my role in guiding lenders and servicers as they innovate using genAI? How will this benefit homeowners and the American taxpayer? As a regulator, how do I ensure the industry honors our commitments to the American people and gives a helping hand where one is needed? How do I protect the consumer from harmful AI? How do I ensure disruptive innovation does not destabilize the mortgage markets?

I won’t pretend to have the answers to these complex, multi-faceted questions, but I do have opinions and, of course, more questions. Today I will focus on regulators and leave the housing agencies for another article. For the purposes of this article, I’ll define regulators as the Consumer Financial Protection Bureau (CFPB), the Federal Housing Finance Agency (FHFA), the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and the Federal Reserve Board (Fed), along with the state-based agencies.

The goal of federal consumer protection law is to prevent borrower harm. To this I would add that as an industry we should also be creating customer delight. GenAI does introduce the potential for borrower harm – as an industry we must be careful about what types of generated content we put in front of consumers. And this applies to all modes of communication (voice, text, imagery). Introducing probabilistic technologies into the underwriting process should be done with care and with a human in the loop when decisions are made. Consumer experiences can be enhanced with additional self-service options and new ways of interacting that are faster and more personal, but they can also be degraded when these experiences are poor, without consumer consent, or where it is hard to get a knowledgeable human.

  • Under what circumstances is it ok for a machine to decide an outcome? What is the definition of a decision? Must a human always be involved? What if a consumer wants a faster decision and is willing to take the risk?
  • Should the industry be able to serve every single human, even if those humans wish to have an entirely analog process? GenAI solutions will seep into every piece of technology used today to make, service, and secure mortgages. If consumers can “opt-out” of genAI-based solutions, must the industry stand up entirely new paper-based processes at additional cost? How do we ensure digital consumers are not harmed by the burden of analog consumers?
  • Are we making it worse for consumers by overloading them with language they don’t read or care to understand? Using plain language is a strong suit of genAI; what if it’s just better?
  • The industry is made up of humans and machines. Humans and machines make mistakes. Mistakes happen today. If we make fewer and less impactful mistakes in the future using generative technologies, shouldn’t the penalty for failure be, well, less? Is there a better way to penalize the industry for mistakes?

Fairness is an American ideal and a personal value. No doubt, we fail to live this value at times, and I choose to believe that when we fail, we have to make it right and try to do it better the next time. A big part of fairness is transparency. If a process is not transparent, how can we really be certain it is fair?

Large language models (LLMs) are trained on massive bodies of knowledge, one of the “easiest” ones being the internet, simply because there is so much of it available. The internet is inherently biased. Therefore, if unmitigated, LLMs will be biased. Much of generative AI is probabilistic, and it is a closely held secret not only how these models work, but how they arrive at their decisions (responses? answers? See above: what is the definition of a decision?). And even if it weren’t a secret, much of the math and science involved is so complex that many of us wouldn’t have the time to understand it anyway.

  • Could generative technologies, carefully used and expertly trained, actually make lending processes even more fair? Is it possible that human judgment and subjective human perspectives actually make it more likely that a lending or servicing outcome is unfair?
  • Is it really an option to do nothing? Can we wait to see what happens, and then penalize after the fact? Is that not a fairness issue as well?
  • How can we help educate innovators on how to practically achieve transparency? What is the definition of a transparent mortgage process? Is mortgage transparent today?

Mortgage is certainly one of the most heavily regulated industries in the world as, perhaps, it should be. For most people, getting a mortgage is the largest financial transaction of their lifetime. For most of us, it is the single largest financial obligation we have. Our homes can bring us safety, can determine the quality of our lives, can create wealth for us and for our families. Losing a home to foreclosure or a disaster can do exactly the opposite. It can end lives and create misery. We want to know that processes are safe and sound. That we can trust the partners we work with. That something will happen if we ask for help. Standard operations, safe and sound practices: for me, this is about trust. I need to be able to trust the entities I work with, even if I don’t know that I work with them.

GenAI can produce results that are so convincing that only an industry expert could tell you if it was wrong. Some of the AI images we see are indistinguishable from “real” images. AI-generated voices will become (have already become?) so human, we will not be able to tell the difference. We are used to the way things are. Love it or hate it, we know it. And now this totally disruptive technology has entered the equation. This does create fear. This does mean things will change. And a new standard will need to be created. And that new standard will need to be safe and sound, without the benefit of 100 years of inertia.

  • Are today’s operations standard? Are they safe and secure? Is standardization good? Why is it good? Is there a better way?
  • We can’t create a magical fake detector because every fake detector will become obsolete as soon as it is understood. What is the new definition of safe and sound? Is there a new way to think about standards?
  • How do we create public trust when the public is inherently distrustful of the new?

Erosion of trust in the mortgage industry increases the unlikely, but not impossible, risk of market destabilization. As I sit here thinking about it, I have come to realize that this is potentially what regulators are most concerned about. GenAI can create a deep, very unsettling fear, and that fear is very real to the people who feel it. Having read the entirety of Leopold’s “Situational Awareness” (an oldie in AI time but a goodie nonetheless), I don’t subscribe to the AI fearmongering, but I do fear AI in the wrong hands for the wrong purposes. For this reason, we do have to be cautious.

  • The best antidotes that I’m aware of for genAI all start with education. How can regulators work with industry to become aware? What does an AI-aware regulator look like? What responsibility does a regulator have to become AI-aware? On what timeline?
  • How can the industry use AI responsibly if no one defines what responsible use in mortgage actually is? Frameworks are fine, but they are not specific enough to be especially useful. The devil is in the details: who will provide the details?
  • If no one provides the details and the industry figures it out as best it can, what are the consequences for industry if we “figure it out wrong”? If wrong is not defined up front, is it actually wrong?

This whole conversation is about weighing the benefits against the risks. Many of us are afraid of what bad things can happen. Regulators may feel a sense of personal responsibility to the homeowner, to the American taxpayer, and in some cases to the world: a responsibility to create a safer, more responsible mortgage industry. In 1995, the cost to originate a loan was $3,500; today it can be as much as $13,000. Is there not room in there for innovation? Safe innovation? Responsible innovation? I think there is. Let's partner as an industry to make it so.
