BankThink

Before they regulate AI, Congress needs to define it

The emerging debate on machine learning and artificial intelligence can sometimes sound like science fiction, with machines destined to grow in complexity until they fight humans for supremacy.

But AI is a tool employed every day, across every industry, streamlining our shopping, banking and entertainment experiences. These tools hand computers logical tasks that, until recently, had to be done by humans. AI is not a threat; it empowers humans.

The Senate Banking Committee recently met to hear concerns about “data brokers and the impact on financial data privacy, credit, insurance, employment and housing,” a hearing in which machine learning and AI featured prominently. A number of senators have raised questions about the use of AI in the past, but there was a noticeable uptick in concern at this hearing.

Senators questioned the role of automated decision-making in credit scoring accountability, and the prospect of bias and malicious manipulation. One lawmaker suggested companies might need to turn over their proprietary predictive models to regulators to “evaluate them for bias and other legal compliance” issues.

While any constructive discussion to better understand these technologies is commendable, Congress and regulators first need to agree on definitions of AI and machine learning before they can have a meaningful debate on the risks, benefits and any future regulation.

AI exists on a spectrum, from weak to strong, defined by its capacity to mimic or replicate a task without explicit instructions. In financial services it ranges from unglamorous application optimization and fraud prevention to headline-grabbing cybersecurity defenses and smarter lending and creditworthiness calculations.

Industry terms like “machine learning” more accurately describe the learning tools that let humans set the parameters and use the power of computers to quickly surface patterns that might otherwise be overlooked. The credit industry was an early adopter of machine learning and has long relied on unusually specific definitions and descriptions of how AI should work.

The Equal Credit Opportunity Act of 1974 and the Community Reinvestment Act of 1977, among others, specifically regulate the decision-making tasks in underwriting and credit scoring, whether performed by humans or by machines. What many don’t understand is that most machine learning processes at firms today are human-supervised, subject to tinkering, testing and, in finance, regulatory scrutiny.

Data science teams work with a machine, a statistical engine, to unearth new correlations in mountains of potential connections. At Kabbage, for example, the process might propose a correlation between credit risk and some facet of small business activity, such as longitudinal patterns in bank account balances.
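
To make that concrete, here is a minimal, hypothetical sketch of the kind of screening step such a statistical engine might perform. The feature names, sample data and threshold are invented for illustration; they are not Kabbage’s actual variables or process.

```python
# Hypothetical illustration: screen candidate account-balance features
# for correlation with historical repayment outcomes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Invented sample data: one row per small business.
df = pd.DataFrame({
    "balance_trend_90d": rng.normal(0.02, 0.05, 1000),   # slope of daily balances
    "balance_volatility": rng.gamma(2.0, 0.5, 1000),      # swings in daily balances
    "days_negative_90d": rng.poisson(3, 1000),             # days below zero
})
# Invented outcome: 1 = repaid on time, 0 = delinquent.
df["repaid"] = (rng.random(1000) < 0.85).astype(int)

# Correlate each candidate feature with the outcome; anything clearing the
# (arbitrary) threshold goes to data scientists for rigorous modeling,
# not straight into production.
correlations = df.drop(columns="repaid").corrwith(df["repaid"])
candidates = correlations[correlations.abs() > 0.05]
print(correlations)
print("proposed for further study:", list(candidates.index))
```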

Once research identifies a potential correlation, data scientists run rigorous regressions to observe the data attributes and build robust training models in search of viable explanations for the link. The resulting model is reviewed by a committee and legal counsel, and validated by the sponsor bank to make sure it meets logical and ethical standards as well as the strict legal requirements for credit underwriting. Then it is tested again and again before being deployed across the platform.
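
As a rough, hypothetical illustration of that testing step (not any firm’s actual validation pipeline), a team might hold out data, fit a regression and check both predictive performance and a simple group-level disparity before the model ever reaches committee review. All names, data and thresholds below are assumptions made for the sketch.

```python
# Hypothetical sketch of holdout testing plus a basic disparity check.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                 # invented balance-derived features
y = (rng.random(2000) < 0.85).astype(int)      # invented repayment outcomes
group = rng.integers(0, 2, 2000)               # invented group label for the check

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print("holdout AUC:", roc_auc_score(y_te, scores))
# Compare approval rates across groups at a fixed score cutoff;
# a large gap would send the model back for rework, not to production.
approve = scores > 0.5
print("approval-rate gap:",
      abs(approve[g_te == 0].mean() - approve[g_te == 1].mean()))
```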

This laborious, very human system of testing and supervised approvals is what is still called AI today. There is no need to worry about autonomous, freewheeling, Terminator-style doomsday scenarios in credit underwriting, but there should be concern about definitions.

Specific industries employing AI and machine learning have working definitions and a grasp of the potential risks, but the same language is not being used across industries or in government. As the House Financial Services Committee Task Force on Artificial Intelligence prepares for its June 26 hearing, I urge Congress to enlist the scientists already hard at work in shaping the future of AI regulation.

Executive Order 13859 directs the National Institute of Standards and Technology to research “Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.” A multistage, transparent research process should be standard practice for every data science team.

Stewards of AI already self-audit, share data and can cooperate to validate their results. Society should look to foster that scientific review process and examine its results and outcomes. Regulators and lawmakers should encourage testing and learning in regulatory sandboxes, rather than stoking anecdotal fear and AI panic, especially as AI and machine learning continue to mature.

AI should be seen less as “intelligence” and more as “automated insights.” Considering these risks as industry guidelines for AI are created is critical, but those rules won’t be well made by distant officials who struggle to properly define the technology.
