AI Targeting Compliance – Including Advisor Communications – Is Evolving

The risks and benefits of using artificial intelligence (AI) in investing are currently the subject of intense debate. At the same time, firms are testing and implementing new ways to enhance their compliance systems with the technology.

Financial advisors, particularly at large wealth management firms, already use new AI software that helps streamline and automate compliance efforts. And some advisors at smaller shops say they are intrigued by the technology's advancements and what the future may bring.

One advisor said she already uses AI-powered solutions to help clients find the right socially responsible investments. She also expressed interest in using AI to help with compliance-related tasks.

“I have a lot to keep track of. I don’t want to slip on the compliance side,” said Lindy Venustus, CEO and founder of Create Financial Planning. 

“I’m totally responsible for everything,” she added. Venustus expressed interest in AI tech that could help her keep track of day-to-day tasks, provided it could do so without compromising the security of her clients’ personal information.

She currently uses compliance software that helps her monitor clients’ billing and reminds her to renew her FINRA license. But it’s clunky and not automated, she said. 

Mona Vernon, the head of Fidelity Labs, a fintech business incubator at Fidelity Investments, wrote in a recent LinkedIn post that AI is changing the way that compliance is done within the wealth management industry. 

Last year, Fidelity launched Saifr, an AI-powered compliance tool. It “employs computational linguistic and transformer-based techniques that ‘understand’ content as it is created,” Vernon wrote. “It then highlights potential marketing and compliance risks, explains why the risk was flagged, proposes alternative language, and can suggest relevant disclosures. Thus, highly trained compliance associates can focus on the more complex issues that require human judgment.”
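Saifr’s internals are proprietary, but the general pattern Vernon describes can be illustrated with open tools. Below is a minimal, hypothetical sketch in Python that substitutes an off-the-shelf zero-shot classifier (facebook/bart-large-mnli via the Hugging Face transformers library) for a purpose-built compliance model; the risk labels and threshold are invented for illustration.

```python
# Hypothetical sketch of transformer-based compliance flagging.
# Saifr's actual models and rules are proprietary and not shown here.
from transformers import pipeline

# An off-the-shelf zero-shot classifier stands in for a purpose-built model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Invented examples of risk categories a compliance team might define.
RISK_LABELS = [
    "guarantees or promises investment returns",
    "omits required risk disclosures",
    "neutral, compliant marketing language",
]

def flag_marketing_copy(text: str, threshold: float = 0.5) -> list[dict]:
    """Return any risk labels scoring above the threshold for this text."""
    result = classifier(text, candidate_labels=RISK_LABELS)
    return [
        {"label": label, "score": round(score, 2)}
        for label, score in zip(result["labels"], result["scores"])
        if "compliant" not in label and score >= threshold
    ]

print(flag_marketing_copy("Our fund guarantees a 12% annual return, risk-free."))
```

A production system would go further than a single classifier, explaining each flag and proposing alternative language, as the Saifr description above notes.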

Compliance Solutions for Advisor-Client Interactions

Kenneth Chavis IV is a senior wealth counselor at Versant Capital Management. He said that AI-powered solutions that flag advisor communications as noncompliant have been around for a few years, but he could certainly see how advancements in the technology might change advisor-client interactions in the future.

“I’m sure the application of using AI for compliance levels will develop,” Chavis said. “Especially for the big wealth firms with many advisors across the country or worldwide. I could see how email correspondence could be flagged before that advisor is even able to send that email to a client,” he added. 

“Frankly, there’s always risk in any kind of customer interaction, but specifically if you’re providing advice within an email application,” he added. “Regarding a consumer or investor that (an advisor) is helping, that customer can file a complaint or take other legal action if that communication caused financial damage.” 

Large and small wealth firms already use AI software on the marketing side that flags communications for compliance, Chavis said.
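Chavis’ pre-send scenario is straightforward to prototype. The sketch below is a hypothetical first-pass gate: a small lexicon of phrases a compliance team might prohibit, checked before an email leaves the firm. Real systems would layer ML models and human review on top of rules like these.

```python
# Hypothetical pre-send compliance gate for advisor email.
# The flagged phrases are invented examples, not a real firm's lexicon.
import re

FLAGGED_PATTERNS = [
    r"\bguaranteed returns?\b",
    r"\brisk[- ]free\b",
    r"\bcan'?t lose\b",
]

def precheck_email(body: str) -> tuple[bool, list[str]]:
    """Return (ok_to_send, matched_patterns) before the email is released."""
    hits = [p for p in FLAGGED_PATTERNS if re.search(p, body, re.IGNORECASE)]
    return (not hits, hits)

ok, hits = precheck_email("This strategy offers guaranteed returns with no downside.")
if not ok:
    # Hold for compliance review instead of sending, as Chavis describes.
    print(f"Held for review; flagged patterns: {hits}")
```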

In an August research paper, Fidelity noted that AI can help enhance cybersecurity and fraud detection efforts in wealth management. 

Many anomaly detection systems today use machine learning (a subset of AI) to pinpoint “unusual events in contrast to historical data, typically in sensitive areas such as money movement,” the paper said, adding that Fidelity recently “began offering this capability to client firms through Wealthscape Analytics℠ to help them identify potential assets and relationships at risk, based on client activities.”
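Fidelity doesn’t publish the internals of Wealthscape Analytics℠, but the underlying idea of flagging events that deviate from historical patterns can be sketched with a standard algorithm. The example below uses scikit-learn’s IsolationForest on synthetic money-movement features; the data, features, and contamination rate are all invented for illustration.

```python
# Hypothetical sketch of ML-based anomaly detection on money movement.
# Real systems use far richer features and historical baselines.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history: (transfer amount in dollars, hour of day) per event.
historical = np.column_stack([
    rng.normal(2_000, 500, 1_000),  # typical transfer amounts
    rng.normal(14, 2, 1_000),       # mostly business hours
])

# Fit the model on historical activity; ~1% of events assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(historical)

# Score new money-movement events against the historical baseline.
new_events = np.array([
    [2_100, 15],   # ordinary daytime transfer
    [95_000, 3],   # large transfer at 3 a.m., likely flagged
])
print(model.predict(new_events))  # -1 marks an anomaly, 1 marks normal
```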

Other AI-powered software, like Presults, focuses on helping financial advisors meet SEC and FINRA compliance requirements related to archiving emails, text messages, social media posts, and changes to websites. 

Growing Scrutiny From Regulators

AI continues to gain ground within the wealth industry. At the same time, regulators are stepping up their scrutiny of the technology, concerned about its potential negative consequences within financial services.

Earlier this year, JPMorgan Chase restricted employees’ use of ChatGPT over compliance concerns and its existing policies around third-party software, according to reports from CNN and Axios. Notably, the bank also has cloud computing software in development that some compare to ChatGPT; it filed a trademark application for the IndexGPT product in May.

Other large banks joined JPMorgan in restricting their employees’ ChatGPT use at the time. They include Bank of America, Deutsche Bank, Goldman Sachs, and Wells Fargo.

The SEC has also had its eye on the long-term implications of mass AI adoption in finance. 

At a July event, SEC Chair Gary Gensler expressed concerns about relying too much on a handful of AI platforms. He noted that AI could play a “central role” in a future financial crisis as a result.

“The possibility of one or even a small number of AI platforms dominating raises issues with regard to financial stability,” Gensler said at the National Press Club event. 

“AI may heighten financial fragility, as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator. This could encourage monocultures. It also could exacerbate the inherent network interconnectedness of the global financial system. Thus, AI may play a central role in the after-action reports of a future financial crisis,” Gensler continued.

He also noted that the SEC was “technology neutral.” He added that, despite challenges raised by AI, the agency “could benefit from staff making greater use of AI in their market surveillance, disclosure review, exams, enforcement, and economic analysis.”

Pending Rules

In late July, the SEC proposed new rules related to AI and other predictive data analytics used by broker-dealers and investment advisers.

“Given the scalability of these technologies and the potential for firms to reach a broad audience at a rapid speed, any resulting conflicts of interest could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible,” the SEC’s release said.

The proposed rules would require firms to determine whether using those technologies in investor interactions creates a conflict of interest, and to eliminate those conflicts and adopt written policies and procedures to achieve compliance.
