Featured Article: 'AI Washing' Crackdown

Written by: Paul | March 27th, 2024

The US investment regulator, the Securities and Exchange Commission (SEC), has dished out penalties totalling $400,000 to two investment companies that made misleading claims about how they used AI, a practice dubbed ‘AI washing’.

What Is AI Washing? 

The term ‘AI washing’ (as used by the investment regulator in this case) refers to the practice of making unsubstantiated or misleading claims about the intelligence or capabilities of a technology product, system, or service in order to give it the appearance of being more advanced (or artificially intelligent) than it actually is.  

For example, this can involve overstating the role of AI in products or exaggerating the sophistication of the technology, with the goal often being to attract attention, investment, or market share by capitalising on the hype and interest surrounding AI technologies.

What Happened? 

In this case, two investment advisory firms, Delphia (USA) Inc. and Global Predictions Inc., were judged by the SEC to have made false and misleading statements about their purported use of artificial intelligence (AI).

Delphia 

In the case of Toronto-based Delphia (USA) Inc, the SEC said that from 2019 to 2023 the firm made “false and misleading statements in its SEC filings, in a press release, and on its website regarding its purported use of AI and machine learning that incorporated client data in its investment process”. Delphia claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.” Following its investigation, the SEC concluded that Delphia’s statements were false and misleading because the firm didn’t have the AI and machine learning capabilities it claimed. Delphia was also charged by the SEC with violating the Marketing Rule, which (among other things) prohibits a registered investment adviser from disseminating any advertisement that includes any untrue statement of material fact.

Delphia neither admitted nor denied the SEC’s charges but agreed to pay a substantial civil penalty of $225,000.

Global Predictions 

In the case of San Francisco-based Global Predictions, the SEC said it made false and misleading claims in 2023 on its website and on social media about its purported use of AI. An example cited by the SEC is that Global Predictions falsely claimed to be the “first regulated AI financial advisor” and misrepresented that its platform provided “expert AI-driven forecasts.” Like Delphia, Global Predictions was also found to have violated the Marketing Rule and, among other securities law violations, it falsely claimed to offer tax-loss harvesting services and included an impermissible liability hedge clause in its advisory contract.

Following the SEC’s judgement, Global Predictions also neither admitted nor denied the findings and agreed to pay a civil penalty of $175,000.

Investor Alert Issued

The cases of the two investment firms prompted the SEC’s Office of Investor Education and Advocacy to issue a joint ‘Investor Alert’ with the North American Securities Administrators Association (NASAA) and the Financial Industry Regulatory Authority (FINRA) about artificial intelligence and investment fraud.

In the alert, the regulators highlighted the need to “make investors aware of the increase of investment frauds involving the purported use of artificial intelligence (AI) and other emerging technologies.”   

The alert flagged up how “scammers are running investment schemes that seek to leverage the popularity of AI. Be wary of claims — even from registered firms and professionals — that AI can guarantee amazing investment returns”, citing “unrealistic claims like, ‘Our proprietary AI trading system can’t lose!’ or ‘Use AI to Pick Guaranteed Stock Winners!’”

Beware ‘Pump-and-Dump’ Schemes 

In the alert, the regulators also warned about how “bad actors might use catchy AI-related buzzwords and make claims that their companies or business strategies guarantee huge gains” and how claims about a public company’s AI-related products and services might also be part of a pump-and-dump scheme. This is a scheme where scammers inflate a company’s stock price by spreading misleading positive information online, causing the price to rise as investors rush to buy. The scammers then sell their shares at this inflated price. Once they’ve made their profit and stop promoting the stock, its price crashes, leaving other investors with significant losses.

AI Deepfake Warning 

The regulators also warned of how AI-enabled technology is being used to scam investors using “deepfake” video and audio. Examples of this highlighted by the regulators include: 

– Scammers using deepfake audio to try to lure older investors into thinking a grandchild is in financial distress and in need of money.

– Scammers using deepfake videos to imitate the CEO of a company announcing false news in an attempt to manipulate the price of a stock. 

– Scammers using AI technology to produce realistic-looking websites or marketing materials to promote fake investments or fraudulent schemes. 

– Bad actors even impersonating SEC staff and other government officials.   

The regulators also highlight how scammers now often use celebrity endorsements (as they have in the UK, using Martin Lewis’s name and image without consent). The SEC says that making an investment decision just because someone famous says a product or service is a good investment is never a good idea.

Don’t Just Rely On AI-Generated Information For Investments 

In the alert, the US regulators also warn against relying solely on AI-generated information in making investment decisions, e.g. to predict changes in the stock market’s direction or the price of a security. They highlight how AI-generated information might rely on data that is inaccurate, incomplete, or misleading, and how it could be based on false or outdated information about financial, political, or other news events.

Advice 

The alert offers plenty of advice on how to avoid falling victim to AI-based financial and investment scams, with the overriding message being that “Investment claims that sound too good to be true usually are.” The regulators stress the importance of checking credentials and claims, working with registered professionals, and making use of the regulators’ own resources.

What Does This Mean For Your Business? 

Just as a lack of knowledge about cryptocurrencies has been exploited by fraudsters in Bitcoin scams, regulators are keen to highlight how a lack of knowledge about AI and its capabilities is now being exploited by bad actors in a similar way.

AI may have many obvious benefits, but the message here, as highlighted by the much-publicised fines given to the two investment companies and the alert issued by regulators, is to beware ‘too good to be true’ AI claims. The regulators have highlighted how AI is now being exploited for bad purposes in a number of different ways, including deepfakes and pump-and-dump schemes delivered via different channels, all of which are designed to exploit the emotions and aspirations of investors and to build trust to the point where they suspend any critical analysis of what they’re seeing and reading and react impulsively.

With generative AI (e.g. AI images, video, and audio cloning) now becoming so realistic and advanced that governments are issuing warnings in a key election year and AI models are being limited in what they can respond to (see, for example, Gemini’s restrictions on election-related questions), the warning signs are there for financial investors. This story also serves as an example to companies to be very careful about how they represent their use of AI, what message this gives to customers, and whether their claims can be substantiated. It’s likely that we’ll see much more ‘AI washing’ in the near future.