NAAHL Webinar: From AI Bias to AI Balance: Harnessing Fairness Technologies in the Mortgage Market
Hello, and welcome to our Financing Racial Equity webinar series, which started during the pandemic to discuss how we can use capital responsibly to expand economic opportunity for people in communities of color. I'm Buzz Roberts, president and CEO of NAAHL, the National Association of Affordable Housing Lenders. NAAHL is the nation's alliance of major banks, community development financial institutions, and others committed to affordable housing finance and inclusive neighborhood revitalization. I want to thank our Racial Equity Committee, led by our chair Lloyd Brown of Citi, for pursuing this webinar series and our other racial equity activities. Today's webinar is titled "From AI Bias to AI Balance: Harnessing Fairness Technologies in the Mortgage Market." Today we will discuss how artificial intelligence can either reinforce or combat racial bias in mortgage lending. AI is emerging as an important technology that will affect us all in many ways, but what does it mean for our work in affordable housing and neighborhoods? Is AI already affecting mortgage underwriting, and how can we use it to advance both racial equity and prudent lending practice? To answer these questions, I'd like to introduce today's speaker, Kareem Saleh, founder and CEO of FairPlay AI. Take it away, Kareem.

Thanks, Buzz, and thanks to NAAHL for having me. I'm Kareem Saleh, founder and CEO of FairPlay. We cheekily refer to ourselves as the world's first "fairness-as-a-service" company, and I'm delighted to be here today to share a little bit about our work applying advanced algorithmic fairness techniques to consumer loan underwriting in the mortgage market. Let's begin.

I'm going to be discussing some data today from the Home Mortgage Disclosure Act (HMDA) database, with which some of you may be familiar. The Home Mortgage Disclosure Act requires mortgage originators to submit certain loan-level data to the government every year, which in theory allows the public to construct a picture of whether mortgage originators are redlining in certain communities, and really to paint a picture of the state of fairness in the American mortgage market.

That word "fairness" I'm going to use a lot today, and there are many different definitions of fairness one could reasonably use; some of those definitions conflict with each other. When I use the word fairness today, I'm referring to the adverse impact ratio, which is the first definition of fairness that courts and regulators commonly apply to understand whether one group experiences a positive outcome, like approval for a mortgage, at a higher or lower rate than another group. So, to recap: the data today is all publicly available data from the HMDA database, and when I refer to fairness, the question I'm asking is whether one group is experiencing a positive outcome, like approval for a mortgage, at a materially different rate than a control group. Let's dig in.

Think back to 2021, the height of the COVID-19 pandemic and a year that saw massive government intervention in the economy. When we looked at the state of mortgage fairness in 2021, the results were arguably quite heartening. Each of the lines on this slide represents the state of mortgage fairness for a protected group in America going back to the 2008 financial crisis, and almost all of these lines start in the bottom left and end up in the top right. 2021 was the fairest year in the American mortgage market since the housing crisis, and we initially regarded that as a positive finding. Except that when we ran the analysis back to 1990, what we saw was that over a thirty-plus-year window none of these lines had actually moved very much. Which is to say: for most groups, mortgage fairness remains stuck at 1990s levels, with no net increase in mortgage fairness for most protected groups in America over the last thirty-plus years.

And then, of course, 2022 happened. What happened in 2022? Interest rates on 30-year fixed mortgages nearly tripled over the course of about eight months, from in some cases below 3% to north of 8%. I was just checking this morning: for people of moderate credit and conforming loans, mortgage rates today are somewhere between 8% and 8.3%. And not only did mortgage rates triple, but lenders started tightening credit rather sharply in early 2022. This data comes from the Federal Reserve, from FRED, and what you can see is that by the end of Q1 2023 about 40% of American lenders had tightened mortgage credit. So it's no surprise that against that backdrop of higher interest rates and tightening credit, mortgage loan volumes fell sharply in 2022. And not only did volumes fall sharply; they fell disproportionately more for Black and Hispanic home buyers. Approval rates for almost every group in the mortgage market fell in 2022, but they fell at higher rates for Black and Hispanic home buyers.

And it's not just approval rates where disparities worsened; it's also loan costs. There has historically been a disparity in the loan costs paid by protected groups. For Hispanic home buyers that disparity was about $750 in 2021; in 2022, that delta widened to about $1,500. So if you were a Hispanic home buyer in 2022, you were paying about $7,500 to close your loan, whereas white home buyers were paying about $6,000 to close theirs.

If you want to see what this looks like at a very granular level, all the way down to individual census tracts, we can build maps. What you're looking at here is the state of mortgage fairness in 2022 for female home buyers. In 2022, female home buyers were approved for mortgages at about 99.2% the rate of male home buyers, down from about 99.8% in 2021. You can see that even though women are approved almost at par in much of the mortgage market, there are still concentrations of unfairness in the South and the Great Plains. It turns out that these red and yellow counties almost all have one feature in common: they are rural. When we look at the urban-rural divide in the mortgage market, we see that urban areas are consistently fairer to female home buyers than rural areas. There's about a 2 to 2.5 percentage-point delta in mortgage fairness for women between urban and rural areas, and that delta appears to persist as far back as we have data, which is to 1990.

Now let me show you the state of mortgage fairness for Black home buyers in America in 2022. As you can see, Black home buyers were approved for mortgages at 76.4% the rate of white home buyers in 2022, with strong concentrations of unfairness in the South and the Northeast. This number, by the way, is down about 10% from 2021: in 2021, Black home buyers were approved at about 84% the rate of white home buyers; in 2022, it was 76.4% the rate of white home buyers.
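The adverse impact ratio used throughout this talk is a simple computation. Here is a minimal sketch in Python; the applicant counts are hypothetical, purely for illustration:

```python
def adverse_impact_ratio(approved_protected, total_protected,
                         approved_control, total_control):
    """Ratio of the protected group's approval rate to the control group's.

    Values below 0.80 (the "four-fifths rule") are commonly treated by
    courts and regulators as potential evidence of disparate impact.
    """
    protected_rate = approved_protected / total_protected
    control_rate = approved_control / total_control
    return protected_rate / control_rate

# Hypothetical counts for illustration only:
# protected group approved 420 of 700 (60%), control 600 of 750 (80%)
air = adverse_impact_ratio(420, 700, 600, 750)
print(f"AIR = {air:.2f}")  # 0.60 / 0.80 = 0.75, below the 0.80 threshold
```

A lender would run this per group and per decision type (approval, pricing, and so on) against its own application-level data.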
And Kareem, why do we think that number dropped? Was it just the tightening of the credit boxes?

I think it's a combination, Buzz. Partly tightening credit boxes, because we know that mortgage underwriting is heavily driven by credit scores, and the distribution of credit scores is lower for Black home buyers. But I presume it's also driven by affordability issues, because when interest rates triple like that, you can buy a lot less home, and the cost of servicing a loan on a house you intended to buy is much, much higher. So it's probably a combination of tightening credit and worsening affordability.

I saw the CFPB just came out with an HMDA report today, and they cited income as an increasing factor in mortgage denials, so that would be consistent with what you're saying.

Yeah. It turns out that if you're a Black home buyer, states cluster into one of three groups. You have states that are approaching fair, where Black home buyers are approved at, say, 89% the rate of white home buyers. You have a set of states we would call solidly unfair, where Black applicants are approved at roughly 80% the rate of white home buyers. And then you have a set of states that are really unfair to Black home buyers, approving them at between 68% and 69% the rate of white home buyers; those are basically financial-crisis-era levels. One of the things we observe when comparing the states that are approaching fair with the states that are really unfair, if you look at the column on the right, is that as states get Blacker, and, Buzz, as average median incomes in those states get lower, the fairness of the mortgage market deteriorates considerably. In fact, there are five states (Louisiana, Mississippi, South Carolina, Arkansas, and Alabama) where, no matter how good the macro environment and no matter how low unemployment, Black home buyers are approved at somewhere between 50% and 70% the rate of white home buyers.

This is the state of mortgage fairness in 2022 for Native American home buyers. You can see Native Americans are approved at about 75% the rate of white home buyers. That, as I'll show you in a minute, is way down from 1990 levels; in fact, it's down about 20 percentage points. In 1990, Native Americans were approved for mortgages at 95% the rate of white home buyers; in 2022, that number was 75%. And we observe a pattern similar to the one we saw for Black applicants, which is to say: the more Native American your community, the less fair the mortgage market is to you. When we built the map, we filtered out all of the census tracts with fewer than 30 applications from Native American home buyers, leaving the census tracts that are lit up here on the map. You can see that for the most part they are red and yellow, which is to say that where you have concentrations of Native Americans, the mortgage market on balance seems to be less fair to them. And again we observe a feature similar to what we saw for Black home buyers: states with large populations of Native American home buyers, like New Mexico and Arizona, exhibit very low mortgage fairness rates for Native Americans.

And that could be connected, perhaps, to tribal restrictions on liens?

That's right. One of the reasons we theorize this may be the case is that it's hard for mortgage originators to perfect title on Native American lands, where there is a complicated set of sovereign immunity issues.

Here's the state of mortgage fairness for Hispanic home buyers. You can see Hispanic home buyers are approved for mortgages at around 85.2% the rate of white home buyers, down from about 88.8% or 88.9% in 2021. One of the things that is sort of heartening about the Hispanic map, maybe in contrast to the Black and Native American maps, is that if you look at the areas known to be heavily Hispanic, like southern Florida, the border area in Texas around San Antonio, or Southern California, those areas tend to be much fairer to Hispanic home buyers. So the more Hispanic your community, the fairer the mortgage market is to you; but the more Black or Native American your community, the worse the mortgage outcomes you experience.

This is the map for Asian-Pacific Islanders, who in 2022 were approved at 94.7% the rate of white home buyers, down from about 98% in 2021. So: approaching parity, but still down a few percentage points year on year.

It's interesting, looking at all these maps, that California seems to have pretty good mortgage fairness, and we know that in many parts of California home prices are very high. I'm wondering if most applicants in California have to have relatively high incomes in order to afford those high home prices, and whether that may be a distinguishing characteristic for applicants of all races in California.

I'm sure that's a relevant factor, yes. Average median incomes in California are higher than the national average; of course, you have to weigh that against the higher property costs too, so it's difficult to know how that nets out.

But if I'm a moderate-income aspiring home buyer in California, there may not be much stock for me to buy, and so I'm just not going to be a mortgage applicant.

That's true too.
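All of the maps and state tables discussed above boil down to group-level approval rates aggregated from loan-level records. A toy sketch of that aggregation, using entirely synthetic records (real HMDA files carry many more fields):

```python
from collections import defaultdict

# Synthetic loan-level records in the spirit of HMDA data:
# (state, race of applicant, approved 1/0)
loans = [
    ("X", "white", 1), ("X", "white", 1), ("X", "black", 1), ("X", "black", 0),
    ("Y", "white", 1), ("Y", "white", 0), ("Y", "black", 1), ("Y", "black", 0),
]

# Tally approvals and totals per (state, race) cell
totals = defaultdict(lambda: [0, 0])   # (state, race) -> [approved, total]
for state, race, approved in loans:
    totals[(state, race)][0] += approved
    totals[(state, race)][1] += 1

def approval_rate(state, race):
    approved, total = totals[(state, race)]
    return approved / total

# Adverse impact ratio per state: protected-group rate over control rate
for state in ("X", "Y"):
    air = approval_rate(state, "black") / approval_rate(state, "white")
    print(state, round(air, 2))
```

The real analysis does exactly this at national, state, county, and census-tract level, filtering out cells with too few applications to be meaningful.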
And there may be a kind of censoring effect in the data.

That's right. So, to recap the state of mortgage fairness as of 2023: for most groups, mortgage fairness is no higher today than it was in 1990. For Black home buyers, mortgage fairness appears to be stuck in neutral. There has been an alarming drop in mortgage fairness for Native Americans going back to 1990. And if there's a silver lining, it's that mortgage fairness for women has improved somewhat in the last 30 years.

Now, when I present these findings, I will sometimes say that these are the outcomes of credit decisions made through our effort to achieve fairness through blindness. Let me explain what I mean by that. We passed the Equal Credit Opportunity Act and the Fair Housing Act more than 40 years ago, and those pieces of legislation prohibited the consideration of race, gender, age, and other protected status in making underwriting decisions. I think that has led some people to believe that, as unfortunate as some of these outcomes might be, they are the result of credit factors or credit variables that are neutral and objective assessors of risk. They say: "Yes, Kareem, it's unfortunate that we have such high disparities in the mortgage market, but these are all individuals who must be rightly denied, because we are using neutral and objective credit factors to underwrite them." And there's just one problem with that: variables that appear objective often aren't.

We were recently doing an engagement with a lender, and we plotted the variables they were using on a two-by-two chart, as you can see here, where the x-axis was how predictive the variable is of protected status (in this instance, how predictive it is of being Black) and the y-axis was how predictive the variable is of default, of risk. What you can see is that things like the origination amount of your loan, the number of months since your oldest trade line was opened, and the number of times you've been delinquent in the last 12 months are all highly predictive of race. So these variables, which we tell ourselves are neutral and objective predictors of credit risk, are actually encoding other information, including information about race.

The practice in the industry has been: okay, let's make a list of all the variables, rank them by how biased they are, and then drop or substitute the ones most correlated with race. You can see here the 24 variables this lender was using, listed 1 through 24 along the bottom; the y-axis is how predictive each is of protected status. In theory, this would allow a compliance officer at a lender to say, "I'm just going to drop these two that are highly predictive of protected status, and all of the other variables will be at most moderately predictive of protected status." And there's just one problem with that: model bias persists even after the progressive elimination of the most biased variables.

Let me explain how that's possible. Variables that on their own are not predictive of race, gender, age, or other protected status can, when combined with other variables that on their own are also not predictive of protected status, encode information about protected status. Let me give you a quick example of how that can happen. Imagine for a moment that we were trying to build a model that predicts the sex of an individual, and as an input to that model I gave you a person's height. Height is somewhat predictive of sex, because men tend to be taller than women, but it is not perfectly predictive, because there are really tall women in the world and really short men in the world. So what if I told you: in addition to height, I'm going to give you weight? Weight adds some incremental predictive power, because even at the same height men tend to be heavier than women, due to things like bone and muscle density and testosterone. Of course, the problem with a model that predicts sex on the basis of height and weight is that it will predict every child to be a woman. So what if, in addition to height and weight, I gave you birth date, to control for the fact that there are children in the world? Now our model for predicting sex is looking pretty good. But if I had told you a moment ago that birth date was predictive of sex, you would have told me I was crazy. This is just an example of how seemingly neutral variables can interact with one another, in ways humans couldn't possibly discern, to encode information about protected status. So just dropping the most obviously biased variables isn't going to work.

My view is that it's time for those of us who work in credit underwriting and who care about affordable housing and fair housing to reckon with the reality that neutrality is a fallacy. The variables we use in underwriting every single day are deeply laden with information about protected status, and we can't tell that just by looking at them; in some cases we can't even tell by doing a sophisticated univariate analysis, as I showed you. You have to do a more complicated multivariate analysis. And if we admit that neutrality is a fallacy, it raises the question: is there another approach that might work? The evidence suggests there is, and we call it fairness through awareness.
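The height/weight/birth-date story can be reproduced numerically. Below is a small, entirely synthetic demonstration of the multivariate effect described above: two variables that are individually weak predictors of group membership but jointly encode it almost perfectly. The variable construction is invented for illustration and has nothing to do with real credit data.

```python
import random

random.seed(0)

def make_person(group):
    """Two 'neutral-looking' variables built from a shared nuisance factor."""
    s = 1 if group else -1
    u = random.uniform(-2, 2)                # shared nuisance factor
    x = u                                    # variable 1: pure nuisance, no signal
    y = 0.6 * s - u + random.gauss(0, 0.1)   # variable 2: signal buried in nuisance
    return x, y

people = [(make_person(g), g) for g in [0, 1] * 5000]

def threshold_accuracy(score):
    """Accuracy of predicting group membership by thresholding a score at 0."""
    correct = sum((score(x, y) > 0) == bool(g) for (x, y), g in people)
    return correct / len(people)

print("x alone:", round(threshold_accuracy(lambda x, y: x), 2))      # ~ coin flip
print("y alone:", round(threshold_accuracy(lambda x, y: y), 2))      # weak
print("x + y:  ", round(threshold_accuracy(lambda x, y: x + y), 2))  # near perfect
```

Adding `x` and `y` cancels the nuisance factor and leaves almost pure group signal, which is exactly why a univariate screen of each variable would pass both of them.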
So what I'd like to talk to you about today is how we leverage fairness-aware machine learning and artificial intelligence techniques at our company to work with lenders in ways that expose their models, or their decisioning systems, to the different credit characteristics of various groups, so that the weights on the variables they consider can be set to be more sensitive to those groups. That is to say, we work with lenders to adjust the weights on their variables to maintain their predictive power while minimizing their disparity-driving effect. Let me show you how that works. But first we need a quick conceptual lesson on artificial intelligence.

The thing about artificial intelligence, another fancy word for which is "algorithms," is that it's just math. But it's math that must be given a target, an objective to achieve. Think about the Facebook algorithm, or social media algorithms generally: those algorithms are given the target of keeping you engaged, and they will relentlessly pursue your engagement regardless of whether the stuff they show you to keep you engaged is good for your mental health or good for society. You didn't tell them to care about those things; you only told them to care about engagement. And if you stop and think about it, giving an algorithm one objective to relentlessly pursue can have all sorts of unintended consequences. Imagine if Tesla gave the neural networks that power its self-driving cars the sole objective of getting a passenger from point A to point B. The self-driving car might do that while driving the wrong way down a one-way road, blowing through red lights, and endangering pedestrians. So what does Tesla have to do? It has to give the algorithm powering its self-driving cars two targets: get the passenger from point A to point B while also respecting the rules of the road.

It turns out we can do the same thing in financial services. We can build models, algorithms, decisioning systems, and scorecards that target which applicants are going to default while also minimizing differences in outcomes for protected groups. We call that fairness through awareness. Let me show you a bit about how that works.

We were recently working with a lender whose model was quite accurate: 95.63% accurate. That means that for every hundred applications this lender's model evaluated, it correctly predicted whether the applicant would default or pay back 95.63% of the time. And it yielded the fairness outcomes you see down below. It was actually quite fair to Hispanic applicants, quite fair to Asian-Pacific Islanders, relatively fair to women, and quite fair to applicants above the age of 62. But you can also see that it was approving Black applicants at 75% the rate of white applicants, which is below the four-fifths (80%) threshold that courts and regulators commonly look to as potential evidence of disparate impact. So this lender asked us: could we do better by Black applicants while staying within our risk tolerance?

The first thing we did was look at the incumbent model: what variables is it taking into account, and to what extent? What we saw was that there were basically ten variables driving the overwhelming majority of the predictive power of this lender's model, and conventional credit scores were responsible for almost 70% of that predictive power. The remaining 30% was made up of variables commonly found on a credit report: the number of times you've been delinquent in the last 12 months, the number of inquiries on your report in the last 12 months, the number of times you've been over limit on your balances, et cetera. We then calculated: if these are the variables driving your decisions, which variables are driving differences in outcomes for one group relative to another? What we saw was that, again, credit scores were driving most of the disparity for protected applicants; credit scores were driving about 71% of the disparity for Black applicants, and the rest of the disparity was made up of those other credit-report variables.
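Conceptually, the reweighting described in this talk can be framed as giving the underwriting model two targets, much like the Tesla example. Here is a toy, fully synthetic sketch of that idea, and to be clear, this is not FairPlay's actual software or method: a tiny logistic model trained by gradient descent on a loss that combines log-loss with a penalty on the gap in mean predicted score between two groups.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic applicants: x1 is a genuine risk driver; x2 carries a little risk
# signal but is strongly correlated with protected-group membership.
data = []
for _ in range(2000):
    g = random.random() < 0.5                     # protected-group flag
    x1 = random.gauss(0, 1)
    x2 = random.gauss(1.0 if g else -1.0, 1)
    repay = random.random() < sigmoid(1.5 * x1 - 0.3 * x2)
    data.append((x1, x2, g, repay))

def train(lam, steps=300, lr=0.2):
    """Fit weights by gradient descent on: log-loss + lam * (score gap)^2."""
    w1 = w2 = b = 0.0
    n = len(data)
    for _ in range(steps):
        scored = [(sigmoid(w1*x1 + w2*x2 + b), x1, x2, g, r) for x1, x2, g, r in data]
        # gradient of the log-loss (the accuracy target)
        d1 = sum((p - r) * x1 for p, x1, x2, g, r in scored) / n
        d2 = sum((p - r) * x2 for p, x1, x2, g, r in scored) / n
        db = sum((p - r) for p, x1, x2, g, r in scored) / n
        # gradient of the fairness penalty (push group mean scores together)
        grp = {True: [t for t in scored if t[3]], False: [t for t in scored if not t[3]]}
        gap = (sum(t[0] for t in grp[True]) / len(grp[True])
               - sum(t[0] for t in grp[False]) / len(grp[False]))
        for sign, rows in ((1, grp[True]), (-1, grp[False])):
            m = len(rows)
            d1 += 2 * lam * gap * sign * sum(p*(1-p)*x1 for p, x1, x2, g, r in rows) / m
            d2 += 2 * lam * gap * sign * sum(p*(1-p)*x2 for p, x1, x2, g, r in rows) / m
            db += 2 * lam * gap * sign * sum(p*(1-p) for p, x1, x2, g, r in rows) / m
        w1, w2, b = w1 - lr*d1, w2 - lr*d2, b - lr*db
    return w1, w2, b

def evaluate(w1, w2, b):
    decisions = [(sigmoid(w1*x1 + w2*x2 + b) > 0.5, g, r) for x1, x2, g, r in data]
    accuracy = sum(a == r for a, g, r in decisions) / len(decisions)
    rate = lambda flag: (sum(a for a, g, r in decisions if g == flag)
                         / sum(1 for a, g, r in decisions if g == flag))
    return accuracy, rate(True) / rate(False)     # accuracy, adverse impact ratio

for lam in (0.0, 5.0):
    acc, air = evaluate(*train(lam))
    print(f"lam={lam}: accuracy={acc:.3f}  AIR={air:.3f}")
```

The penalized model shifts weight away from the group-correlated variable, improving the adverse impact ratio at little cost in accuracy, which is the same qualitative trade the talk describes.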
So the question this lender had was: could we squeeze more predictive power out of the variables we're not using very much, but that appear not to have much of a disparity-driving effect, and reduce our reliance on the variables we were primarily relying on to evaluate credit risk, which seem to have a large disparity-driving effect? This lender used our software and was able to identify seven models, each of which would have been as accurate or more accurate (in fact, many of them about half a percent more accurate) than the incumbent model, and all of which would have dramatically increased positive outcomes for Black applicants and for women. You can see here the FairPlay models, numbered one through seven, with their accuracies listed; each is about half a percent more accurate than the incumbent. Which is to say: the addition of that protected-status consciousness during model development, the use of that protected-status information during model training, added incremental predictive power to the model. It was able to learn something additional about the credit characteristics of these groups that it wasn't able to learn previously, and overall achieve a higher rate of accuracy while also approving more applicants, in particular from Black communities. In fact, model number one here would have increased fairness for Black applicants from 75% to 87%, a jump of about 16%.

This lender used our model as a second-look model: a model used as a final step in their decisioning process to re-decision all of the applicants that would have been denied by their incumbent model. You can see that they were able to identify an additional 5,200 loans, at a higher rate of accuracy than their incumbent, that would have dramatically increased fairness for Black applicants. This lender only makes about 60,000 loans a year, so an additional 5,200 loans is an increase in approval rate of close to 10%. More good loans, at a higher rate of accuracy, to protected groups. And when we look at the payback rates for those groups, we see that they are virtually identical: Black applicants are paying back at 99.91% the rate of white applicants.

Then we ask: what is this second-look model? What variables does it take into account, and to what extent? What we see is that the second-look model makes more use of more of the variables, more evenly. Remember that the primary model was relying on credit score for about 70% of its predictive power; the second-look model reduced the influence of credit score by about 90%, from 70% to 6.5%, and tuned up the influence of the other variables, which are also predictive of risk but have less of a disparity-driving effect for Black applicants. So this lender is able to make more money and do more good simply by reweighting the variables it was already using, in ways that make its decisions more sensitive to historically underserved groups. That's good for profits, good for people, and good for progress.

Let me conclude by saying that we are at a crucial moment with respect to the adoption of artificial intelligence in financial services, and I think we're also at a crucial moment with respect to issues of financial inequality. Because for all of our efforts over the last 30, 40, 50 years to improve the fairness of the mortgage market, to improve the fairness of the housing market, to improve affordability, many of the key metrics we look at haven't budged, despite many great efforts to solve these problems.
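The second-look arrangement described a moment ago can be sketched as a simple decision waterfall. The weights and threshold below are hypothetical, chosen only to echo the reweighting in the talk (credit score turned down from roughly 70% of the model's weight to a small share):

```python
# Toy second-look decisioning flow. All weights, fields, and the threshold
# are hypothetical; a real system would use full trained models.

def primary_score(applicant):
    # Incumbent model: leans heavily on credit score
    return 0.7 * applicant["credit_score_pct"] + 0.3 * applicant["other_pct"]

def second_look_score(applicant):
    # Fairness-optimized model: weight redistributed to the other variables
    return 0.1 * applicant["credit_score_pct"] + 0.9 * applicant["other_pct"]

def decide(applicant, threshold=0.5):
    """Primary model decides first; its denials are re-decisioned."""
    if primary_score(applicant) >= threshold:
        return "approve (primary)"
    if second_look_score(applicant) >= threshold:
        return "approve (second look)"
    return "deny"

# An applicant with a thin credit file but strong other characteristics
applicant = {"credit_score_pct": 0.40, "other_pct": 0.70}
print(decide(applicant))  # denied by the primary model, caught on second look
```

The appeal of this design is that the incumbent model and its approvals are untouched; only applicants it would have denied get a second evaluation.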
And so we're at a moment when algorithms are going to start making those decisions instead of people, and making them across a range of domains: not just underwriting and pricing, but marketing, fraud detection, collections, and loss mitigation. I think there's a very real risk that if we don't act today, the bias of the past will be programmed into the digital decisions of the future. So I think it's incumbent upon all of us who work in this space, in the world of credit underwriting and the world of housing finance, as we prepare ourselves and our AI strategies for the digital age, to also take action to ensure that algorithms are not perpetuating the bias in the data that trains them. That is doable; there are increasingly good tools for doing it; and I think it's time for us to ask whether fairness through awareness is an idea whose time has come.

Anyway, that's a little bit of what I wanted to share with you today. Buzz, why don't we take some questions, if there are any?

Great. I see we have a question here from Marilyn Rivera: how do you determine which other metrics to look at when you're working with lenders?

Yeah, so there are a bunch. The adverse impact ratio is one; the denial odds ratio is another. We look at average differences in price, usually expressed in basis points, but you can also do a computation called a standardized mean difference, which is the average difference in price standardized to account for variance. There are several different metrics you can look at, and they typically relate to questions like: is one group experiencing a positive outcome at a higher or lower rate than another group? What is the difference in outcomes between groups? Is one group experiencing a negative outcome at a higher or lower rate than another group? We've actually done a deck that shows how you can do these computations on the back of an envelope, and we're happy to share that with the audience, or you can get it at fairplay.ai.

Has this information been presented to the MBA or to the big banks?

Yeah. I had the great privilege of being invited to the MBA conference a few years ago, and I participate in forums like this one. We're heavily engaged with the National Fair Housing Alliance on a mortgage fairness optimization study, the results of which we will be making public in January of 2024. So we are making every effort to bring this message to the industry and to the other stakeholders who have an interest in hearing it.

It looks like there's a question about HMDA reporting: reporting of race, ethnicity, and gender has decreased substantially, with estimates as high as 50%. How do you think this is impacting your analysis?

Right. So we still have pretty healthy sample sizes here; this analysis over the course of 30 years covers something like more than a hundred million applications for which protected-status information was given. But the questioner is totally right. We sometimes deal with mortgage originators whose applicants are volunteering their protected-status information only 14 or 15% of the time. For those lenders we use the CFPB-approved proxy process, which allows you to approximate demographic class membership on the basis of first name, last name, street address, and age. Obviously we can't do that in the public records, because we don't have that personally identifiable information, and there are also some pretty well-known problems with proxy methodologies. But you're absolutely right: I think the ecosystem would benefit from a lot more data sharing, especially around protected-status information.
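The proxy process referred to here is commonly implemented as Bayesian Improved Surname Geocoding (BISG), which combines a surname-based probability with a geography-based one. A toy sketch with made-up probabilities and generic group labels; real implementations use Census surname tables and block-group demographics:

```python
# BISG-style proxy calculation (all numbers hypothetical).
# Combines the two evidence sources via Bayes' rule, assuming surname and
# geography are independent given race/ethnicity:
#   P(race | surname, geo)  proportional to  P(race|surname) * P(race|geo) / P(race)

p_race_given_surname = {"groupA": 0.70, "groupB": 0.30}   # from a surname table
p_race_given_geo     = {"groupA": 0.20, "groupB": 0.80}   # from census-block data
p_race_overall       = {"groupA": 0.50, "groupB": 0.50}   # population baseline

posterior = {
    r: p_race_given_surname[r] * p_race_given_geo[r] / p_race_overall[r]
    for r in p_race_given_surname
}
total = sum(posterior.values())
posterior = {r: p / total for r, p in posterior.items()}  # normalize to sum to 1
print(posterior)
```

Note how the geography evidence pulls the estimate away from what the surname alone would suggest; this blending is both the strength of BISG and the source of the well-known accuracy problems mentioned above.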
protected status information uh is race documented at the data tract or the ZIP code I think it's actually at the loan individual loan level um and then and then the way bisg works is um it works at the census Block Level using data from the Census Bureau uh are you only considering mortgage activity do you have plans to expand into other areas actually we work across the full Waterfront of consumer credit the data I've shown you today is from mortgage but we work on installment loans credit cards auto loans handset device and yes uh increasingly small business because uh I'm sure as some of you know um section 1071 uh of the dodf Frank Act is uh likely to come into Force for the F full for the first time you know someplace in about 18 months although I understand that it's been uh slowed down a bit by some challenges in the courts uh but everybody but that's effectively uh Home Mortgage disclosure act like reporting regime and and so we much as we have publicly available data for mortgages today uh it is foreseeable that at some point in the future we will have uh Sim uh data for small business lending which in theory ought to also allow us to construct a picture of fairness in the small business loan Market um have you had an audience with the banking Regulators yeah we've been fortunate uh to brief all of the federal financial Regulators on this work whether it's OCC Federal Reserve uh cfpb FDIC um I think you know they are still coming up the curve and thinking about what the best interventions in this space ought to be although we counted all of the press releases from the cfpb last year that mentioned algorithmic bias or discrimination and we counted something like 16 uh so you know more than one press release a month out of the bureau last year um sending a signal to the market that these issues are very top of mind um how are fin how many financial institutions are using this Service uh so it's a banks in certain geographical locations using the service 
more than others? Yes, we have several big banks using the technology today, and I would say there are a few primary use cases. The first is compliance automation: banks that send their credit analytics teams and data scientists down a fair lending rabbit hole one quarter a year to do this kind of testing ask whether that is automatable, so one area where we're experiencing positive traction with big banks is on the compliance automation front. Another application follows from all the consolidation and mergers in the banking sector: a bank that has bought another bank suddenly has service areas in communities it isn't familiar with or hasn't historically done business in, and it now has to make sure it is serving those communities equitably, that its underwriting is sensitive to applicants from those communities, and that it is doing a good enough job of actively reaching out to them. So a second application is: we've made commitments, or for one reason or another we need to serve a community that is low- and moderate-income, majority-minority, or simply new to us; can you help us with that? A third application is around third-party origination: we lend our charter to some other originator. You see that very commonly in mortgage, where lenders originate through independent MLOs, or in direct auto, where lenders originate through dealers. In all of those cases they want visibility into whether the third parties using their charters to originate are treating consumers fairly. And then the last application we're
seeing is: we want to push our first machine learning model into production, and our compliance teams have never evaluated a machine learning model before, so we need help building technical tools that let non-technical, non-AI people quickly understand whether the models are fair and whether they pose a threat to the safety and soundness of the institution or to the consumers it serves. What are the flaws, what are the pushbacks? I would say there are maybe two primary ones. The first, as I mentioned, is that in order to do this kind of analysis you need protected status information. ECOA and the FHA say you're not allowed to consider race, gender, or age when you're making a credit decision, which is totally reasonable, but they don't say you can't use some consciousness of protected status when you're building the model, so that the weights are set in ways that are fair to everybody. Over the years, though, the prohibition on the use of protected status information has been read as "you can't use this data for anything, at any time," and I think that's an over-reading of the law. The law says you can't consider it at decision time; it doesn't say you can't use that information to build models that are fair to everybody. That's just going to take some time for the industry and for regulators to wrap their heads around, because we're so used to the abject prohibition that has dominated our practice since the passage of ECOA, the Fair Housing Act, and so on. Somebody has asked about my reference to second look. The fairness optimization technology I've described today can be taken by a lender in one of two ways: you can adjust the incumbent, primary underwriting model, or you can leave it in place and add a second model.
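The training-time idea above, using protected status only to set fair weights while building the model, never as a decision-time input, can be sketched with a fairness-penalized logistic regression. This is a generic illustration on synthetic data with an arbitrary penalty weight, not FairPlay's actual method.

```python
import numpy as np

# Sketch of fairness-aware training: the protected attribute appears
# only in the training loss (a demographic-parity penalty), never as a
# model input, so it is not "considered" at decision time.
# Synthetic data; a generic illustration, not FairPlay's method.

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # protected attribute (training only)
x = rng.normal(group * 0.8, 1.0)         # credit feature correlated with group
X = np.column_stack([x, np.ones(n)])     # model inputs: feature + intercept only
y = (x + rng.normal(0.0, 1.0, n) > 0.5).astype(float)
g1, g0 = group == 1, group == 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=2000, lr=0.1):
    """Gradient descent on log-loss + lam * (demographic-parity gap)^2."""
    w = np.zeros(2)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n         # plain log-loss gradient
        gap = p[g1].mean() - p[g0].mean()
        d_gap = (X[g1].T @ (p[g1] * (1 - p[g1])) / g1.sum()
                 - X[g0].T @ (p[g0] * (1 - p[g0])) / g0.sum())
        w -= lr * (grad + lam * 2.0 * gap * d_gap)
    return w

def parity_gap(w):
    p = sigmoid(X @ w)
    return abs(p[g1].mean() - p[g0].mean())

w_plain = train(lam=0.0)   # accuracy only
w_fair = train(lam=5.0)    # accuracy plus fairness penalty
```

With the penalty switched on, the gap between the two groups' average predicted approval probabilities shrinks relative to the plain model, while the deployed scorer still takes only the credit feature and intercept, never the protected attribute.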
Some lenders, for whatever reason, don't want to touch that primary underwriting model, so they use the fairness-aware model as a second look, or a double check, on the first model, sometimes as part of a special purpose credit program. Either way you can get the benefit of this technology: by tuning your incumbent strategy, or by augmenting it to re-decision your declined applicants, or the folks you approved who didn't take their loans, perhaps by optimizing pricing for them. Just on that last point: why wouldn't a lender simply build this into its primary model? Well, it all depends on how hard or easy it is for you to push a new model into production. For some lenders that's very easy; for others, especially those that have been cobbled together over the course of many acquisitions and have underinvested in their technology infrastructure, pushing a model is quite a bit of work, and in some cases at big banks it can take up to two years. Or you may feel comfortable with your incumbent model in the primary position but want to use the second-look model as part of some commitment you've made, like a special purpose credit program or a DEI program. Different institutions have made different commitments, and they seem to appreciate the flexibility, from both a policy perspective and a technology perspective, to get the benefit in one of a couple of different ways. In the first, less fair model you showed, there was a very heavy focus on credit scores. There's more than one credit scoring model; are they all the same with respect to fairness, or are some better than others? Well, one score has dominated the mortgage market for the last 30-plus years, and FHFA has now authorized the use of another score, in part, I think, because there's evidence that it is more inclusive.
It does a better job of reaching certain communities, and that's great, but I think what we really need is a competitive marketplace, obviously with guardrails. Ultimately we may be headed for a world in which a thousand scores bloom, scores tailored to certain segments or certain products, and I think that democratization in the world of scores, provided those scores are well validated and de-biased, is a good thing. We have another question in the chat asking again about the two ways lenders can use the fairness optimization technology. You can either optimize your incumbent strategy or augment it: you can say, I just want to adjust the weights on the variables in my primary model, or you can say, I'm going to leave my primary model in place but put in a second-look model, sometimes called a double-check model, that re-decisions all of the declined applicants, or the applicants who were approved but didn't take loans, to see whether you can do a better job of underwriting them or of pricing them. How important are alternative payment history metrics here? There's been a lot of talk in particular about measures such as utility bill payment history or rent payment history; do those help maintain predictiveness and improve fairness? We see that the closer you can get to a consumer's balance sheet, the fairer the underwriting generally is, so in my opinion the state of the art in underwriting data today is cash flow data: if you can actually see the ledger entries in a bank account, you can make judgments about whether people consistently maintain positive bank balances, and so forth.
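The cash flow idea can be sketched as a few running-balance features computed from ledger entries. The transactions, feature names, and thresholds below are illustrative assumptions, not an actual underwriting policy.

```python
from datetime import date

# Toy sketch of cash-flow underwriting features from bank ledger
# entries. Transactions are (date, amount) pairs; the data and
# feature names are illustrative, not an actual policy.

transactions = [
    (date(2023, 1, 1), 2500.00),   # paycheck
    (date(2023, 1, 3), -1200.00),  # rent
    (date(2023, 1, 10), -300.00),  # utilities
    (date(2023, 1, 15), 2500.00),  # paycheck
    (date(2023, 1, 20), -450.00),  # car payment
]

def cash_flow_features(txns, opening_balance=0.0):
    """Running-balance features: minimum balance, share of entries
    that leave the account positive, and count of overdraft events."""
    balance = opening_balance
    balances = []
    for _, amount in sorted(txns):
        balance += amount
        balances.append(balance)
    return {
        "min_balance": min(balances),
        "pct_positive": sum(b > 0 for b in balances) / len(balances),
        "overdrafts": sum(b < 0 for b in balances),
    }

features = cash_flow_features(transactions)
```

Features like these sit much closer to the consumer's actual balance sheet than a summary score, which is the sense in which cash flow data can be both predictive and fairer at the margin.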
As you said, telco data and rental histories matter too. It turns out that when you underwrite a loan you're really just trying to make a judgment about how responsible an individual is, and you can find indicia of a person's responsibility in all kinds of places, if you care to look. I was going to move on to Phyllis's question, but sticking with this for a moment: I've seen some data out of a rent reporting company, and it seems to have a significant effect on credit scores for renters, in part, I think, because many of them have thin files, especially lower-income renters. I was struck by how that reporting seems to have increased credit scores, and I'm not quite sure what to make of it. There could be some Heisenberg effect here, or it could be that the credit models for thin files are so imprecise that they're not really giving a full picture of the person's credit capacity. What are your thoughts? I think whether or not people pay their rent is highly predictive of whether or not they'll continue to pay housing expenses in the future, so if you can get that data, it can allow you to be more discerning at the margin. And do we have enough of that reporting now? There have been improvements, and a lot of people are working on that problem. For reasons I don't entirely understand, reporting is much more difficult than it ought to be, but I'm heartened to see that we have made some progress in that respect, I think across all three credit bureaus, and I know there are a number of technology companies and startups also working to help those thin-file and no-file tenants thicken their files. I noticed that the CFPB recently put out a statement
that said lenders are responsible for fair lending problems that can arise from third-party credit models. Can you talk a little bit about that? I think what the Bureau is saying is that you can't outsource your fairness to some third party, use a score from that third party, and then throw your hands up and say, well, it wasn't me, it was XYZ score provider. I think the other thing they're saying is that in a world where you're using data that is potentially intrusive, like behavioral data or social media data, you have to be forthcoming about the fact that you're doing so, and you can't hide behind vague reason codes when you deny people a loan. An example I think they gave was gambling: if you're using someone's credit card transaction history for underwriting and you ultimately decide to deny them because you see that they visit a lot of online gambling sites, that may be reasonable from a credit risk perspective, but people might not love that you're mining their data that way. The Bureau's expectation is that you'll tell them "I denied you because you're gambling too much," not something so vague and ambiguous that they don't have an accurate sense of how you're actually assessing their creditworthiness. Well, that's great. A final word: I think it's a very exciting time for those of us who are working on fairness issues in the financial services industry. The challenges are perhaps not as stark as they have ever been, because in the past we had policies of explicit exclusion, but now, it seems to me, there are a bunch of glass ceilings. So in that sense I don't know whether the situation is better or worse, but it's a very exciting time
for those of us who care about these issues, because the technology is making change more possible, in some ways, than it has ever been. So thank you for the opportunity to share the work we're doing, thank you for the work that all of you do, and I appreciate very much the efforts NAAHL has put into racial equity over the last several years; I hope others will join you in that effort. Well, thank you again, Kareem, for a wonderful and very thought-provoking presentation. I also want to thank our audience and our Racial Equity Committee, chaired by Lloyd Brown of Citi. Today's webinar will also be posted on our website in the next week; that's www.
naahl.org. And please join us for our annual Policy and Practice Conference, which this year will take place November 8th at the National Press Club in Washington, DC. Information and registration are also available on our website, www.naahl.org. Again, thanks to everyone, and we hope to see you at our annual conference in November.