
Social Credit: Much More Than Your Traditional Financial Credit Score Data

We are used to thinking only about our financial behaviors when trying to improve our credit scores. That is probably because FICO’s ranking formula has been an industry standard for consumer credit of all kinds since the 1950s. But in recent years, more and more marketplace lenders have ditched traditional FICO scores for data-driven business models that analyze behavioral as well as social information from a variety of online sources to determine applicants’ creditworthiness.

Typically, marketplace lenders’ business models are built on proprietary algorithms treated as trade secrets, which keep the exact scoring methods confidential. Credit rating companies frequently justify the secrecy as a way to keep competitors from learning how their systems are built and operated, and to prevent scored individuals from deceiving the lender by falsifying their applications to reach a desired score.

This shift in credit ranking has been happening not just in the U.S., but all over the world. China, for example, has adopted a new social credit system that offers a real-world version of a dystopian society, in which individuals are ranked based on every one of their interactions and transactions. As a result, seemingly arbitrary factors are included in the ranking, such as how individuals manage their online social activity, how often they consume fast food, or even whether they pick up after their dogs. And if the rankings reflect poor judgment, the result is more than a low credit score that hurts the scored individual’s consumption, home rental, or employment prospects. A lower score also affects that individual’s social circles, social mobility, and social capital.

In the U.S., until recently, the term credit typically referred to one’s predicted financial standing and trustworthiness. Credit stayed limited to that scope largely because uncertainty around antidiscrimination laws kept traditional banks and even FinTech lenders from using alternative data, including information gleaned from social media, despite the belief that posts, pictures, shares, likes, one’s contacts, and even typing habits and writing styles can all play a role in gauging a user’s financial trustworthiness.

But lately, marketplace lenders have been incorporating more and more data sources into their credit assessments, believing that doing so generates significantly more accurate results. That changing landscape has now been endorsed by the U.S. government. Just last week, on December 3, 2019, leaders of various government agencies announced that credit scores in the U.S. have officially moved beyond traditional financial-behavior principles. And while they did not announce any specific standards, regulators formally backed the use of alternative information – information not traditionally used by the national consumer reporting agencies – in calculating consumers’ credit scores. Focusing specifically on borrowers’ cash flow, the regulators supported using such alternative information as a substitute for the traditional credit-evaluation system based on an applicant’s past history of borrowing and repayment.

The regulators mean well. As explained in the official statement, alternative data “may help firms evaluate the creditworthiness of consumers who currently may not obtain credit in the mainstream credit system.” Experts estimate that about 45 million U.S. consumers lack the credit history required to produce trustworthy credit scores under the current system, in addition to the millions that cannot access credit because of their low scores.

Addressing this issue, FinRegLab, a nonprofit research organization, conducted studies of cash-flow variables and credit scores using data from six FinTech providers – Accion, Brigit, Kabbage, LendUp, Oportun, and Petal – and concluded that the data appears to have useful and independent predictive value and can help bring more people under the financial services umbrella. In addition, the regulators recognized that the use of alternative data may improve the speed and accuracy of credit decisions, and that such data might help lenders assess consumers who cannot otherwise obtain credit in the mainstream credit system.
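
To illustrate what cash-flow variables can look like in practice, here is a minimal sketch in Python of the kind of bank-account features such underwriting might derive. The feature names, formulas, and example figures are hypothetical assumptions for illustration only; the participating lenders’ actual models are proprietary.

```python
# Illustrative sketch of simple cash-flow variables of the kind FinRegLab studied.
# Feature names, formulas, and figures are hypothetical; real models are proprietary.
from statistics import mean

def cash_flow_features(monthly_inflows, monthly_outflows):
    """Derive a few simple cash-flow variables from monthly bank-account data."""
    net_flows = [inflow - outflow for inflow, outflow in zip(monthly_inflows, monthly_outflows)]
    return {
        "average_net_flow": mean(net_flows),                       # typical monthly surplus or deficit
        "months_with_deficit": sum(1 for n in net_flows if n < 0), # how often spending exceeded income
        "inflow_volatility": max(monthly_inflows) - min(monthly_inflows),  # income stability
    }

# Example: twelve months of hypothetical inflows and outflows.
features = cash_flow_features(
    monthly_inflows=[3200, 3100, 3300, 3000, 3250, 3150] * 2,
    monthly_outflows=[2900, 3000, 2800, 3100, 2950, 3050] * 2,
)
print(features)
```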

Nevertheless, legislative and regulatory authorities must balance FinTech businesses’ promise of greater financial inclusion against the significant risks posed by incorporating alternative data and new methodologies. The U.S. regulators’ announcement last week does very little of that balancing. Instead, it merely contains a single sentence of caution: “[T]o the extent firms are using or contemplating using alternative data, the agencies encourage responsible use of such data.” That is not enough. We need clear standards to actually guarantee responsible use of data by FinTech businesses and banks. Indeed, Congress’ independent watchdog, the Government Accountability Office, declared in December 2018 that regulators should clarify their stance on the use of alternative data, both to help lenders comply with fair-lending laws and to help banks better manage the risks of working with FinTech businesses.

The risks are quite concerning. In a 2016 academic study, Professor Yafit Lev-Aretz and I voiced concerns about the potentially harmful long-term effects of expanding the types of alternative data used for credit-ranking purposes, and especially of merging social data with financial data. Among the possible risks we described were social segregation, decreased social mobility, and related privacy harms, in addition to the obvious questions about fairness and potential lending discrimination. Similarly, consumer advocates and scholars have argued that using alternative data could exacerbate income inequality and widen the gap between those with and without access to inexpensive credit.

But the danger does not end there. If not properly monitored and regulated, the concept of social-based credit in the U.S. can wreak more havoc. The American private sector is no stranger to social-based credit assessment. We are used to a Yelp-style ranking culture in which users and consumers rate businesses and individuals providing services, ranging from Uber drivers and Airbnb hosts to medical doctors and university professors. But while those rankings are of limited scope and practical application, comprehensive social scores may be underway as well. 

In 2015, Facebook patented a technology for loan approvals based on a user’s social connections. The patent documents explain that a loan application triggers an examination of the credit ratings of the applicant’s social network contacts: “if the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.” Facebook has given no clear indication of its plans for this patented technology in lending, but the company’s recent venture into the cryptocurrency world might give rise to an all-encompassing use of such a social-financial mix, which would not be surprising. After all, as a recent article explained, every startup or tech company wants to be a financial service provider now.
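
To make the patent’s gating rule concrete, here is a minimal sketch in Python of the logic the filing describes. The function name, threshold value, and handling of applicants with no scored contacts are illustrative assumptions, not Facebook’s actual implementation.

```python
# Illustrative sketch of the gating rule quoted from the 2015 patent.
# Names, threshold, and edge-case handling are hypothetical assumptions.

def network_average_gate(contact_credit_scores, minimum_credit_score=650):
    """Return True if the loan application continues to processing, False if it
    is rejected, based only on the average credit rating of the applicant's
    social network contacts."""
    if not contact_credit_scores:
        return False  # the patent text does not address applicants with no scored contacts
    average = sum(contact_credit_scores) / len(contact_credit_scores)
    return average >= minimum_credit_score

# An applicant whose contacts average above the threshold passes the gate,
# regardless of the applicant's own financial history.
print(network_average_gate([700, 610, 655]))  # True  (average 655)
print(network_average_gate([580, 600, 590]))  # False (average 590)
```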

It is not surprising, therefore, that Facebook filed another patent application, published in 2018, that showcases potential social credit strategies. The patent, titled “Socioeconomic Group Classification Based on User Features,” describes an algorithmic process for determining users’ social class. Various data sources and qualifiers are analyzed to place a user into a “working class,” “middle class,” or “upper class” classification. Home ownership status, education, the number of gadgets owned, and how much time the user spends on the internet are some of the factors considered. The application explains that the algorithm is intended for use by “third parties to increase awareness about products or services to online system users.” Facebook also assigns trustworthiness scores to its users. The scores, according to the company, measure users’ credibility but are not meant to be an absolute indicator of it, since Facebook uses many behavioral indications to understand and predict risk.
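
A rough, purely illustrative sketch of the kind of rule-based classification the application describes might look like the following. The features, weights, and cutoffs here are invented for this example; the filing does not disclose a concrete model.

```python
# Hypothetical sketch of the socioeconomic classification described in the 2018
# patent application. Features, weights, and cutoffs are invented for illustration.

def classify_socioeconomic_group(owns_home, education_years,
                                 device_count, internet_hours_per_day):
    """Place a user into a coarse socioeconomic bucket from a handful of signals."""
    score = 0
    score += 3 if owns_home else 0
    score += 2 if education_years >= 16 else (1 if education_years >= 12 else 0)
    score += min(device_count, 4)                 # number of gadgets owned, capped
    score += 1 if internet_hours_per_day >= 2 else 0

    if score >= 8:
        return "upper class"
    if score >= 4:
        return "middle class"
    return "working class"

print(classify_socioeconomic_group(True, 16, 3, 4))   # "upper class"
print(classify_socioeconomic_group(False, 12, 1, 1))  # "working class"
```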

As businesses continue to further develop the concepts of credit, we should be mindful of the shift towards using more and more alternative data, including social information, as a substitute for the traditional credit-evaluation system, and be careful of the potential risks and problematic predictions. It would be wise for regulators to focus on the possible dangers and more explicitly limit businesses’ ability to use certain types of data for scoring purposes.

New York State has recently focused on such dangers. On November 25, 2019, Governor Cuomo signed legislation (S.2302/A.5294) prohibiting consumer reporting agencies and lenders from using certain information to determine an individual’s creditworthiness. Moreover, the bill specifically prohibits determining an individual’s creditworthiness by using the credit scores of people in that individual’s social network. Limiting the information that can be used to assess credit is not new. For example, limitations exist in the context of medical information: while an individual’s terminal illness could considerably affect his or her ability to repay a loan, regulation restricts the use of specific medical data for credit-scoring purposes.

In other contexts, such as insurance underwriting, similar limitations on the use of prohibited data are also being put in place. For example, on January 18, 2019, the New York State Department of Financial Services issued an insurance circular with guiding principles on the use of alternative data in life insurance underwriting. Specifically, insurers must independently determine that external data sources do not collect or use prohibited criteria, and should not use external data unless they can establish that it is not “unfairly discriminatory.” 

The country’s regulators should promote innovation and financial inclusion, but they must continually examine existing anti-discrimination laws and regulations to ensure they don’t inadvertently bless discriminatory credit-scoring systems based on overly broad alternative information. We must push our lawmakers to proactively pass legislation that distinguishes legitimate ranking from adverse, risky, and socially harmful ranking, rather than waiting for the damage to be done and then reacting.
