Five Principles for Responsible Use of Artificial Intelligence/Machine Learning Technologies in Asset Management

By Andrew Rice, Partner and Portfolio Manager, Beaumont Capital Management (BCM)

As the general public becomes more informed about artificial intelligence and machine learning technologies, it is also becoming more aware of the potential risks that those of us who have been using these methods for a long time have been grappling with.

Because we have been thinking about the challenges of translating quantitative signals into timely and diversified portfolios for over a decade, we thought it would be useful to share our five core principles for the responsible use of AI, specifically as it relates to making investment decisions.

Our five principles for using AI responsibly in asset management are:

  1. Maintain many points in the modeling-to-portfolio-implementation process during which humans with domain expertise may exercise oversight of the models.
  2. Understand the “black box” as much as humanly possible.
  3. Ensure data is clean and readily available.
  4. Always be learning and improving.
  5. The harder it is to compute, the better.

1. Maintain many points in the modeling-to-portfolio-implementation process during which humans with domain expertise may exercise oversight of the models.

Why is this important?

For us, this is probably the most critical principle. There is always a risk that a data error, a specious data pattern, or some other unforeseen error, if left unnoticed, may cause the models to malfunction. Having experts involved in oversight of these models at multiple points helps to significantly reduce the risk that our “self-driving car” will “drive off a bridge.”

How do we implement this principle?

Our company’s quantitative and financial market experts oversee:

  • Design and selection of the datasets the models have access to;
  • Architecture of the algorithms that drive the optimization processes and how they will evaluate success;
  • Output of the optimized models, helping to ensure it is logical, consistent, and works out of sample;
  • Implementation of the model signals in the portfolios in a way that is consistent with the investment objective; and
  • Evaluation of model performance over time and determining how best to continuously improve.

All of our research and investment team members who have a role in this model oversight process have between five and 30+ years of experience in machine learning, computer programming, and financial markets. Read more about our investment team members here.

2. Understand the “black box” as much as humanly possible.

Why is this important?

If it is not clear to the portfolio management team how a machine learning or artificial intelligence system is making decisions, it is very difficult to identify potential sources of malfunction before something goes wrong. It also makes it much more difficult to actively work to improve the accuracy of the system’s predictions. Finally, it is simply unsatisfying not to have a working theory for why a model prefers one investment over another, both for us as portfolio managers and for the end advisor (and their clients). A “best possible” understanding of how the quantitative system makes decisions helps to avoid these situations. (I say “best possible” because some artificial intelligence systems, such as deep learning, which we are not currently using at Beaumont Capital Management (BCM), include millions or billions of interlinked parameters, making it impossible to fully understand how or why the model outputs are what they are.)

How do we implement this principle?

The class of algorithm we currently use, called genetic algorithms, outputs an algebraic formula (e.g., 6A + 2B + 4C = X), which enables us to see exactly which variables are being used and how much weight is being placed on each. We maintain a database of more than 2,000 variables that the algorithms draw from to build our model formulas, which typically include 20-25 variables. (Examples of a few of the more basic variables we use include correlation, return over the last five weeks, and price relative to the 52-week high. Some variables are far more complex.) We therefore know which of the 2,000+ variables are the most critical to the models’ decision-making. We have also built tools that analyze, and to some degree visualize, how each model goes about ranking ETFs. For example, we know that for our models, long-term trailing returns are very important, as are extreme correlations (close to 1 or below 0) and the cost basis of an ETF for recent investors.
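To make the idea concrete, below is a minimal, purely illustrative sketch of how a genetic algorithm can evolve a transparent weighted formula whose coefficients can be read off directly. This is not our production code: the variable count, fitness function, and parameters shown are hypothetical placeholders.

```python
# Purely illustrative sketch (not BCM's production system): a toy genetic
# algorithm that evolves a linear scoring formula over a handful of candidate
# variables, analogous in spirit to the "6A + 2B + 4C = X" style of output.
# The fitness function and all parameters below are hypothetical placeholders.
import random

N_VARS = 8          # candidate variables (a real library might hold thousands)
POP_SIZE = 50       # candidate formulas per generation
GENERATIONS = 200

def fitness(weights, signal):
    """Placeholder fitness: score a candidate formula against toy data.
    In practice this would be an out-of-sample performance measure."""
    return sum(w * s for w, s in zip(weights, signal))

def mutate(weights, rate=0.1):
    """Randomly perturb some coefficients."""
    return [w + random.gauss(0, rate) if random.random() < rate else w
            for w in weights]

def crossover(a, b):
    """Splice two parent formulas at a random cut point."""
    cut = random.randrange(1, N_VARS)
    return a[:cut] + b[cut:]

def evolve(signal):
    population = [[random.uniform(-1, 1) for _ in range(N_VARS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda w: fitness(w, signal), reverse=True)
        parents = population[: POP_SIZE // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    population.sort(key=lambda w: fitness(w, signal), reverse=True)
    return population[0]   # best formula: an inspectable list of coefficients

if __name__ == "__main__":
    toy_signal = [random.uniform(-1, 1) for _ in range(N_VARS)]
    print("Evolved coefficients:", [round(w, 2) for w in evolve(toy_signal)])
```

The point of the sketch is simply that the output is a set of explicit coefficients, so a human can inspect exactly which inputs drive the score and by how much.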

3. Ensure data is clean and readily available.

Why is this important?

If models are built on a data source of inconsistent or unknown quality, or one that may not be available going forward, all of our work building the models would be for naught. The phrase “garbage in, garbage out” has become a cliché for a reason.

How do we implement this principle?

We maintain multiple redundant data licenses for our most important data sets. This enables us to check the incoming data for accuracy and helps ensure that if one data provider goes down or serves us bad data, we have another source immediately available.
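As a simple illustration of what this redundancy can look like in practice (this is not our actual pipeline; the feed values, tolerance, and failover logic are hypothetical), two feeds for the same field can be reconciled before they ever reach the models:

```python
# Illustrative sketch only: cross-checking two redundant data feeds for the
# same field before it reaches the models. The values, tolerance, and
# failover logic below are hypothetical.
def reconcile(primary: dict, backup: dict, tolerance: float = 1e-4):
    """Compare two price feeds keyed by ticker; return the validated feed
    plus a list of discrepancies to flag for human review."""
    validated, discrepancies = {}, []
    for ticker, px in primary.items():
        alt = backup.get(ticker)
        if alt is None:
            discrepancies.append((ticker, "missing in backup feed"))
            validated[ticker] = px
        elif abs(px - alt) / max(abs(alt), 1e-12) > tolerance:
            discrepancies.append((ticker, f"primary={px}, backup={alt}"))
        else:
            validated[ticker] = px
    return validated, discrepancies

feed_a = {"SPY": 502.10, "QQQ": 430.55}
feed_b = {"SPY": 502.10, "QQQ": 431.80}
prices, issues = reconcile(feed_a, feed_b)
print(issues)   # the QQQ mismatch is flagged for review rather than used
```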

We do not use any free or open-source data unless it is for initial research only. While some data sets can be expensive, we much prefer to be a paying customer rather than a freeloader in the event that something goes wrong. Having a data license means we can receive customer service from the data provider and, since they are being paid, they are incentivized to strive for constant uptime and quality data.

4. Always be learning and improving.

Why is this important?

Any quantitative system built at a particular point in time will only be able to respond to the market data and behavior patterns it has been trained on. It will also be structured based on the knowledge and experience of the R&D and PM teams, as well as the computational methodologies available, at the time it was initially constructed. If too much time lapses from the initial system design, any alpha initially discovered will likely melt away and may even become negative.

How do we implement this principle?

If we had locked our system down to what it was when we first went live in 2012, we would have missed learning from several major market events and innovations in machine learning technology. Not only do we ensure our algorithms are always learning by continuing to optimize 24/7/365, but our R&D team is always learning as well: from system errors, from new market dynamics, from advances in computer science and machine learning methodology, and from our own experiences investing our personal or corporate accounts alongside our quantitative system. We strive to upgrade at least one facet or component of our investment system annually, incorporating what we have learned into what we intend to be better predictive technology. In some years, that may mean adding a new set of variables; in others, it may mean making the optimization algorithms more efficient and therefore able to learn and update more quickly.

5. The harder it is to compute, the better.

Why is this important?

This may not be as intuitive as some of our other principles. By hard to compute, we mean that a variable or group of variables is time consuming for the computer to process for the entire investment universe of 200+ ETFs on a daily basis. If a given variable is hard to compute, it is much less likely that a lot of other quantitative investors are using it. It also makes it much more difficult for someone to steal and implement our intellectual property. This protects any potential alpha we have uncovered with a given variable set and helps ensure the uncovered pattern continues to work for a longer time, which would benefit our clients. We don’t set out to create “hard to compute” datasets, but we do get excited when we develop a dataset that requires a lot of creativity and computational power to produce on a daily basis (that’s the computer scientist in us)!

Making use of knowledge about one of our variable sets would require a high level of ingenuity, mathematical skill, and computer programming expertise.

How do we implement this principle?

We tend to prefer variables that are based on commonly used metrics (e.g., correlations or price relative to the 52-week high) but scaled up into many individual variables that define different aspects of the common metric. Using correlations as an example, rather than looking at an average correlation of a given ETF, we look at the pairwise correlations of every ETF in the dataset with every other ETF in the dataset. This is a very time-consuming dataset to compute, as the number of computations needed grows with the square of the number of ETFs in the universe. But it is also very powerful (these are among our most used variables in the Decathlon system) because the dataset can reveal very subtle relationships between the ETFs in our investment universe that our models can then use to suggest timely investment opportunities.
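For a rough sense of the scale involved (this is a stand-in, not our production pipeline; the universe size, return window, and thresholds are placeholders), a full pairwise correlation matrix for a couple hundred ETFs already involves tens of thousands of distinct pairs:

```python
# Illustrative sketch only: computing a full pairwise correlation matrix for
# an ETF universe. With n ETFs there are n*(n-1)/2 distinct pairs, so the
# work grows with the square of the universe size. Data here is random
# stand-in data, so the flagged counts are meaningless.
import numpy as np

n_etfs, n_days = 200, 252                               # universe size, trailing window
returns = np.random.normal(0, 0.01, (n_days, n_etfs))   # stand-in daily returns

corr = np.corrcoef(returns, rowvar=False)               # n_etfs x n_etfs matrix

# Flag "extreme" pairwise relationships of the kind described above
# (correlations close to 1 or below 0).
i, j = np.triu_indices(n_etfs, k=1)                     # each distinct pair once
extreme = (corr[i, j] > 0.95) | (corr[i, j] < 0.0)
print(f"{i.size} distinct pairs, {int(extreme.sum())} flagged as extreme")
```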

It is worth noting that “hard to compute” presents a potential risk should a database be compromised. We guard against that risk in two ways: (1) we’ve developed very efficient algorithms that have significantly reduced computation time and (2) we keep multiple backups of our core datasets in multiple physical and cloud-based places.

Make Sure Your Asset Manager is Using AI Responsibly

We are passionate about, and have a vested interest in, helping to ensure that everyone in our industry uses these technologies responsibly. That is, of course, not something any one firm can accomplish on its own, so the next best step is to help educate and empower potential investors (like you) to ask the right questions and understand what they are investing in.

We believe we have built an efficient, effective, and responsible machine learning-based investment system, and we are always interested in sharing more with current or prospective investors. If you’d like to dig into the nuts and bolts of our quantitative system, or have questions about how we see AI-based systems impacting the financial markets, please reach out to one of our regional consultants to request a meeting with the PM and/or research team.

Click here for more resources and education about Artificial Intelligence.

For more news, information, and analysis, visit the ETF Strategist Channel.


Disclosures:

Copyright © 2024 Beaumont Capital Management LLC.

This document does not constitute advice or a recommendation or offer to sell or a solicitation to deal in any security or financial product. It is provided for information purposes only and on the understanding that the recipient has sufficient knowledge and experience to be able to understand and make their own evaluation of the proposals and services described herein, any risks associated therewith and any related legal, tax, accounting or other material considerations. To the extent that the reader has any questions regarding the applicability of any specific issue discussed above to their specific portfolio or situation, prospective investors are encouraged to contact Beaumont Capital Management or consult with the professional advisor of their choosing.

Certain information contained herein constitutes “forward-looking statements,” which can be identified by the use of forward-looking terminology such as “may,” “will,” “should,” “expect,” “anticipate,” “project,” “estimate,” “intend,” “continue,” or “believe,” or the negatives thereof or other variations thereon or comparable terminology. Due to various risks and uncertainties, actual events, results or actual performance may differ materially from those reflected or contemplated in such forward-looking statements. Nothing contained herein may be relied upon as a guarantee, promise, assurance or a representation as to the future.

There is no guarantee the Decathlon strategies will achieve their investment objectives. There is no guarantee any investment strategy or product will generate a profit or prevent a loss. Investing in any investment involves risk, including loss of principal. Risks specific to the Decathlon Strategies include commodities risk, credit risk, ETF risk, fixed income/bond risk, foreign currency risk, market risk, foreign investment risk, junk bond risk, management risk, no history of operations risk, quantitative investing risk, real estate risk, small and medium capitalization stock risk, swap risk, and turnover risk.

The Decathlon strategies utilize artificial intelligence (AI) in the decision-making process, introducing inherent risks. The AI’s lack of predictability, reliance on historical data, and sensitivity to market volatility may impact investment outcomes. Technology-related risks and the dynamic nature of market conditions further contribute to potential uncertainties. Ongoing monitoring and adjustments to the AI model are essential. Investors should recognize the limitations of AI, seek professional advice, and carefully assess their risk tolerance and financial situation before making investment decisions.