By Tela G. Mathias
In speaking with industry about the uses of generative AI (genAI) in mortgage, I sometimes get questions from regulators and housing agencies about what their role is in this conversation. As a housing agency, what is my role in guiding lenders and servicers as they innovate using genAI? How will this benefit homeowners and the American taxpayer? As a regulator, how do I ensure the industry honors our commitments to the American people and gives a helping hand where one is needed? How do I protect the consumer from harmful AI? How do I ensure disruptive innovation does not destabilize the mortgage markets?
I won’t pretend to have the answers to these complex, multi-faceted questions, but I do have opinions and, of course, more questions. Today I will focus on regulators and leave the housing agencies for another article. For the purposes of this article, I’ll define regulators as the Consumer Financial Protection Bureau (CFPB), the Federal Housing Finance Agency (FHFA), the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and the Federal Reserve Board (Fed), along with the state-based agencies.
The goal of federal consumer protection law is to prevent borrower harm. To this I would add that as an industry we should also be creating customer delight. GenAI does introduce the potential for borrower harm – as an industry we must be careful about what types of generated content we put in front of consumers. And this applies to all modes of communication (voice, text, imagery). Introducing probabilistic technologies into the underwriting process should be done with care and with a human in the loop when decisions are made. Consumer experiences can be enhanced with additional self-service options and new ways of interacting that are faster and more personal, but they can also be degraded when those experiences are poorly designed, deployed without consumer consent, or make it hard to reach a knowledgeable human.
Fairness is an American ideal and a personal value. No doubt, we fail to live this value at times, and I choose to believe that when we fail, we have to make it right and try to do it better the next time. A big part of fairness is transparency. If a process is not transparent, how can we really be certain it is fair?
Large language models (LLMs) are trained on massive bodies of knowledge, one of the “easiest” ones being the internet, simply because there is so much of it available. The internet is inherently biased. Therefore, if unmitigated, LLMs will be biased. Much of generative AI is probabilistic, and not only how these models work, but how they arrive at their decisions (responses? answers? see above – what is the definition of a decision?), is a closely held secret. And even if it weren’t a secret, much of the math and science involved is so complex that many of us wouldn’t have the time to understand it anyway.
Mortgage is certainly one of the most heavily regulated industries in the world and, perhaps, it should be. For most people, getting a mortgage is the largest financial transaction of a lifetime. For most of us, it is the single largest financial obligation we have. Our homes can bring us safety, can decide the quality of our lives, can create wealth for us and for our families. Losing a home to foreclosure or a disaster can do exactly the opposite. It can end lives and create misery. We want to know that processes are safe and sound. That we can trust the partners we work with. That something will happen if we ask for help. Standard operations and safe and sound practices – for me, this is about trust. I need to be able to trust the entities I work with, even if I don’t know that I work with them.
GenAI can produce results so convincing that only an industry expert could tell you if they are wrong. Some of the AI images we see are indistinguishable from “real” images. AI-generated voices will become (have already become?) so human, we will not be able to tell the difference. We are used to the way things are. Love it or hate it, we know it. And now this totally disruptive technology has entered the equation. This does create fear. This does mean things will change. And a new standard will need to be created. And that new standard will need to be safe and sound, without the benefit of 100 years of inertia.
Erosion of trust in the mortgage industry increases the unlikely, but not outside the realm of possibility, risk of market destabilization. As I sit here thinking about it, I have come to realize that this is potentially what regulators are most concerned about. GenAI can create a deep, very unsettling fear. That fear is very real in the people who feel it. I have read the entirety of Leopold’s “Situational Awareness” (an oldie in AI time but a goodie nonetheless), and while I don’t subscribe to AI fearmongering, I do fear AI in the wrong hands used for the wrong purposes. For this reason, we do have to be cautious.
This whole conversation is about weighing the benefits against the risks. Many of us are afraid of what bad things can happen. Regulators may feel a sense of personal responsibility to the homeowner, to the American taxpayer, and in some cases to the world. A responsibility to create a safer, more responsible mortgage industry. In 1995, the cost to originate a loan was $3,500; today it can be as much as $13,000. Is there not room in there for innovation? Safe innovation? Responsible innovation? I think there is. Let's partner as an industry to make it so.