
Challenges and Risks in Generative AI: Considerations for Investors

Published: 23/02/2024

By Kate Tong, Analyst, Portfolio Research, ESG, TDAM



Investment Insights | New Thinking | 5 minute read

Since OpenAI's ChatGPT went viral in late 2022 for its unprecedented ability to engage in human-like conversations and provide articulate responses across a wide range of knowledge domains, several competitors have begun introducing their own iterations of the technology. This type of AI technology, known as generative AI, is based on large language models that are trained on massive amounts of data, which could include text, images or other media. The models identify the patterns and structures of the training data and then generate new content with similar characteristics based on user prompts.

There are various benefits to incorporating generative AI in a business – process improvements, cost reduction and value creation, to name a few. Leveraging these opportunities, companies across different sectors have already begun testing and implementing generative AI tools. Examples range from financial institutions deploying chatbots trained on internal databases to provide financial advice to customers, to healthcare institutions automating the generation of medical documentation based on conversations between patients and physicians. Across industries, companies are also incorporating generative AI tools in marketing, customer service and product development.

As such, investors have to pay attention not only to the large tech companies building the foundational models, but also to companies that are starting to incorporate generative AI tools into their businesses. As with most new technologies, there are potential risks that should be adequately considered and safeguarded against before widespread deployment. Regulation will be important in helping reduce these risks. But because regulation develops at a much slower pace than AI itself, investors should actively consider these risks and seek stewardship opportunities with companies involved in generative AI to address them.

Challenges and Risks in Generative AI

Generative AI models have various known issues. These models have a tendency to "hallucinate," generating false outputs that are not justified by the training data and presenting them as fact. These errors can be caused by various factors, such as improper model architecture or noise and divergences in the training data. Opaqueness about how model outcomes are generated is also an issue. With billions to trillions of model parameters determining the probabilities of each part of a response, it is exceedingly difficult to map model outputs back to the source data, including in cases of hallucination.

In addition, if the training data contains societal prejudices or if the algorithm design is influenced by human biases, the model may learn and propagate these biases in its outputs. Enterprise applications could also be vulnerable to data privacy issues and cybersecurity threats. These include leakage of sensitive information within the training data if the model is customer- or public-facing, use of personal or sensitive data in model training that may have required explicit consent, and malicious attacks from hackers aiming to manipulate model outputs. These issues give rise to various legal and reputational risks, the scale of which depends on the criticality of the use case and the company's industry. For example, financial and healthcare companies may face severe consequences if problems do arise, given the high-stakes nature of their industries.

Sample Use Cases in the Financial Industry

In financial advisory use cases, model hallucinations could result in inappropriate advice or the wrong product being offered to undiscerning clients, which could undermine public trust in AI systems and the financial institutions using them. Opaqueness about how model outcomes are generated is also a key issue for financial institutions, as these institutions are required to be able to explain their decisions internally and to external stakeholders. Considering all this, it is best practice to implement a degree of separation between direct model outputs and the customer, where internal staff could be trained to recognize potential errors and inconsistencies in model outputs and assume ultimate responsibility for the decision-making process.
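To make this degree of separation concrete, the sketch below shows one way a model's draft reply could be routed through an internal reviewer before anything reaches the client. It is a minimal illustration only: the DraftReply structure, the ReviewDecision outcomes and the route_draft function are hypothetical names chosen for this example, not a description of any institution's actual system.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewDecision(Enum):
    APPROVE = "approve"   # reviewer accepts the draft as written
    REVISE = "revise"     # reviewer will send a corrected reply instead
    REJECT = "reject"     # draft is withheld and escalated


@dataclass
class DraftReply:
    client_question: str
    model_text: str                      # raw generative-model output
    cited_documents: list = field(default_factory=list)


def route_draft(draft: DraftReply, decision: ReviewDecision) -> str:
    """Release a draft to the client only after explicit human approval.

    The model never communicates with the client directly; a trained
    staff member reviews each draft and assumes responsibility for it.
    """
    if decision is ReviewDecision.APPROVE:
        return draft.model_text
    if decision is ReviewDecision.REVISE:
        return "Held: advisor will respond with a corrected reply."
    return "Held: draft escalated for compliance review."


if __name__ == "__main__":
    draft = DraftReply(
        client_question="Which fund suits a short investment horizon?",
        model_text="Based on your horizon, a money market fund may be suitable.",
        cited_documents=["product_guide_2024"],
    )
    print(route_draft(draft, ReviewDecision.APPROVE))
```

The design point is simply that every path other than explicit approval keeps the model's text inside the institution, so accountability for the advice stays with trained staff rather than with the model.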

Generative AI could also offer a quick and low-cost way for financial institutions to profile their clients for marketing campaigns, risk management and identification of suspicious transactions. However, overreliance on generative AI profiling could violate anti-discrimination laws due to potential bias embedded within the models. Appropriate human judgment will need to complement generative AI models that perform client profiling. Financial institutions will also need to have strong data privacy policies and robust cybersecurity measures to address generative AI's risks to their sensitive client information and proprietary data.

Questions for Investors to Consider

In view of all these issues and risks, below are questions investors should consider when assessing companies employing generative AI tools:

  • What are the risk-mitigating mechanisms and controls in place? Solutions include having trained internal staff act as an intermediary between direct model outputs and the customer; working to understand potential biases in the training data and addressing them in model design; regular and proactive monitoring of model output to promptly identify and address any signs of hallucination (a sketch of one such check follows this list); implementing robust cybersecurity measures; etc.
  • What is being done to enhance model performance? Solutions include ensuring that training data is high quality, accurate and up to date; implementing iterative feedback loops to refine and improve model performance; etc.
  • Is there transparency and oversight regarding ethical AI principles? This pertains to providing transparency on data sourcing and data privacy concerns; defining clear policies and procedures to ensure compliance with ethical standards and emerging regulations; outlining the roles and responsibilities of individuals involved in the development, operation and oversight of the generative AI model; etc.
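As a rough illustration of the monitoring point in the first question above, the sketch below flags responses that cite documents outside an approved internal corpus, one cheap and observable signal of a possible hallucination. The APPROVED_SOURCES set and the screen_response function are hypothetical names used for illustration; an automated check like this would complement, not replace, human oversight.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai_output_monitor")

# Hypothetical identifiers of documents the model is allowed to cite.
APPROVED_SOURCES = {"product_guide_2024", "fee_schedule_2024", "kyc_policy"}


def screen_response(response_text: str, cited_sources: list) -> bool:
    """Return True if the response passes this automated check.

    A citation to a document outside the approved corpus is treated as a
    possible hallucination: the response is logged and held for human
    review instead of being passed along automatically.
    """
    unknown = [src for src in cited_sources if src not in APPROVED_SOURCES]
    if unknown:
        log.warning(
            "%s possible hallucination: unapproved sources %s in %r",
            datetime.now(timezone.utc).isoformat(),
            unknown,
            response_text[:80],
        )
        return False
    return True


if __name__ == "__main__":
    ok = screen_response(
        "Our balanced fund has no management fees.",
        ["marketing_blog_draft"],
    )
    print("released" if ok else "held for review")
```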

The information contained herein has been provided by TD Asset Management Inc. and is for information purposes only. The information has been drawn from sources believed to be reliable. The information does not provide financial, legal, tax or investment advice. Particular investment, tax, or trading strategies should be evaluated relative to each individual's objectives and risk tolerance.

This material is not an offer to any person in any jurisdiction where unlawful or unauthorized. These materials have not been reviewed by and are not registered with any securities or other regulatory authority in jurisdictions where we operate.

Any general discussion or opinions contained within these materials regarding securities or market conditions represent our view or the view of the source cited. Unless otherwise indicated, such view is as of the date noted and is subject to change. Information about the portfolio holdings, asset allocation or diversification is historical and is subject to change.

Certain statements in this document may contain forward-looking statements (“FLS”) that are predictive in nature and may include words such as “expects”, “anticipates”, “intends”, “believes”, “estimates” and similar forward-looking expressions or negative versions thereof. FLS are based on current expectations and projections about future general economic, political and relevant market factors, such as interest and foreign exchange rates, equity and capital markets, the general business environment, assuming no changes to tax or other laws or government regulation or catastrophic events. Expectations and projections about future events are inherently subject to risks and uncertainties, which may be unforeseeable. Such expectations and projections may be incorrect in the future. FLS are not guarantees of future performance. Actual events could differ materially from those expressed or implied in any FLS. A number of important factors including those factors set out above can contribute to these digressions. You should avoid placing any reliance on FLS.

TD Asset Management Inc. is a wholly-owned subsidiary of The Toronto-Dominion Bank.

® The TD logo and other TD trademarks are the property of The Toronto-Dominion Bank or its subsidiaries.

The statements and opinions contained herein are those of Kate Tong and do not necessarily reflect the opinions of, and are not specifically endorsed by, TD Asset Management Inc.

