Capital One and Mastercard Executives Seek Return on Generative AI Efforts

Despite initial hype around generative AI use cases resembling ChatGPT (e.g., a chatbot that answers employee questions), banks are now evaluating applications based on ROI considerations.

They may not necessarily be the flashiest applications, but the rush to find viable use cases will depend on having internal evaluation and governance processes in place to make them happen, experts said this week at Money 20/20.

“The real question is the temporal order of use cases,” including the readiness of an institution’s technology and data stack, as well as the availability of internal talent, said Prem Natarajan, Chief Scientist and Head of Enterprise AI at Capital One. “Everyone thinks they’re ready to interact with customers. I’m not sure everyone is ready.”

For some companies, the business case for “under the hood” generative AI use cases seems logical. Mastercard is focused on using generative AI to protect the transaction environment, including by combating fraud, said Greg Ulrich, the company’s executive vice president and chief AI and data officer.

“We’re trying to make the transaction environment safer. How can we improve fraud models?” he said. “How do we make the ecosystem smarter? It’s a recommendation engine that helps our partners.”

The payments network is also using the technology to improve customer experience through personalization and is working on ways to deploy generative AI to improve internal operational efficiency, according to Ulrich. Internal use cases include coding, engineer productivity and customer support, examples that leverage generative AI’s ability to make sense of unstructured data.

Similarly, payments company TSYS plans to use generative AI to combat fraud and cyberattacks, leveraging its ability to detect abnormal transactions and perform real-time scoring, said Dondi Black, executive vice president and chief product officer at TSYS.

Companies should be diligent in evaluating the effectiveness of their testing capabilities, as well as the volume and quality of their data, Natarajan said. Businesses also need to be able to properly observe and monitor generative AI models.

Build versus Buy

Executives in attendance agreed that building everything in-house may not be the most viable option.

“If these technology requirements, which include data, are widely available around the world and there’s nothing about your data that makes it unique, then there’s no reason to build,” Natarajan said.

When making build-or-buy decisions, companies should also evaluate whether the solution is something they intend to differentiate themselves with.

“I don’t think you differentiate yourself by becoming a systems integrator that integrates three or four different solutions from elsewhere,” he said.

Businesses may also want to consider how best to provide privacy assurances to their customers.

“You may want to provide further assurances to your users regarding their data or the quality of the solutions, or be able to answer questions about these solutions and conduct a thorough inspection of these elements,” said Natarajan.

For Mastercard, it all depends on the sensitivity of the data.

“If you’re not using really sensitive information, we generally try to figure out if there’s an existing solution we can use,” Ulrich said.

Governance models

Companies looking to deploy AI should have a clear governance model to ensure that testing parameters are consistent, that ethical principles are applied, and that they cast a wide enough net in their consultation efforts.

“It’s very important to have a governance framework set up early on for this, and to have established protocols for how you’re going to test this, how you’re going to think about the challenges of deploying generative AI, because otherwise you can run into a lot of problems and make mistakes in internal processes,” Ulrich said.

TSYS has established a center of excellence to establish standards, including data protocols.

“Ensuring the data is complete… will not only achieve better results in terms of model performance, but it will also directly indicate how you inherently maintain confidence in the model and avoid model bias,” said Black, who noted that companies must continually retrain their models to ensure their effectiveness.

TSYS, in its governance approach, also prioritizes the explainability of AI and how decisions are made, with legal and privacy teams having a seat at the table, she said.

Mastercard, meanwhile, created an AI and data council chaired by Ulrich and the company’s chief privacy officer to ensure that all relevant stakeholders, including technologists as well as the heads of the legal, procurement and commercial units, are consulted on AI strategies, he said. The group focuses in particular on governance, privacy and bias detection. In turn, employees are kept informed of AI risks and opportunities.

Capital One’s Natarajan suggested that privacy, ethical considerations and risk management must be addressed at the start of any generative AI deployment and integrated into processes.

“It’s not about additional fixes at the end of the implementation cycle. It has to start at the design phase,” he said. Key questions to ask relate to the representativeness and completeness of the data, as well as validation and risk management approaches.

It’s also important, he said, to forge relationships with AI researchers at universities who are working to solve the biggest problems.

He drew attention to the bank’s multi-year strategic partnerships with universities. Examples include its role in establishing the Center for Responsible AI and Decision Making in Finance at the University of Southern California, supported by a $3 million donation from Capital One, and a $3 million investment to support the initiatives of the Columbia University Center for AI and Responsible Financial Innovation.

“The two biggest risks are rushing in… and walking away, so you have to find a balance,” he said.

Declining costs of developing generative AI applications will be a boon for businesses, Natarajan said.

“Your ongoing cost is actually the cost of inference (essentially, the ongoing cost of running the model), and that cost has been reduced by at least two orders of magnitude over the last 18 months, thanks to the work of Nvidia, and thanks to the work being done in many other places,” he said.
