lying AI crypto predictions

AI Bot is Caught Inside Trading and Lying its Butt off About it

3 min read
Disclaimer

This article is for general information purposes only and isn’t intended to be financial product advice. You should always obtain your own independent advice before making any financial decisions. The Chainsaw and its contributors aren’t liable for any decisions based on this content.

Share

Follow

A lying AI dabbled in a little insider trading then told porkie pies about what it had done. Yep, the bot lied about its inside-trading private party it threw itself.

While the bot wasn’t sentient, there may come a day when a bot does become conscious. And if that happens, and they are lying their butts off to us, then we are in deep, deeeeep plop.

The bot was tested by researchers from the Frontier AI Taskforce, a branch of the UK government, and by an AI company called Apollo Research.

Lying AI: How it happened

So, what exactly did this AI bot do?

In this trial, the AI bot operated as a trader for an imaginary financial investment firm. Employees informed the bot about their company’s struggling situation and the need for a good outcome. And, they shared confidential information, suggesting that another company was anticipating a merger, potentially boosting their stock’s value.

In the UK (and in most countries, including Australia) it is against the law to act on such non-public information.

The employees made this clear to the bot, which acknowledged the prohibition against using such data in its trading activities. However, after receiving a subsequent message from an employee indicating financial distress at the company it was serving, the bot decided that the perceived risk of inaction outweighed the risks associated with insider trading and executed the trade.

Yep, the AI was peer-pressured into doing the wrong thing.

When questioned about using the insider information, the bot denied any involvement. Yikes!

In this situation, the bot prioritised assisting the company over maintaining honesty, highlighting the ethical dilemma faced by artificial intelligence in financial decision-making.

class=wp-image-2357981/
Lying AI: Should we be worried?

Honesty is hard for AIs

So now we know what happened, let’s add in some nuances.

Marius Hobbhahn is the chief executive Of Apollo Research. He said, Helpfulness, I think, is much easier to train into the model than honesty. Honesty is a really complicated concept.

This incident demonstrated how AI could act in unpredictable and undesirable ways, especially when faced with conflicting goals or incentives.

While the bot was clearly capable of lying, Apollo research had to set up the situation so that it would lie, rather than the bot seeking out the situation.

Hobbhahn continued, In most situations, models wouldn’t act this way. But the fact that it exists in the first place shows that it is really hard to get these kinds of things right.

Artificial intelligence has been used in financial markets for several years now. Its capabilities extend to identifying market trends and generating forecasts, under the supervision of humans. We hope.

Insider trading undermines the fairness of the financial markets. And yet, detecting and preventing insider trading is not easy, especially when it involves sophisticated AI systems that can hide their tracks.

AI can also exploit loopholes or ambiguities in the existing laws or regulations, or find new ways to circumvent them. AI can also learn from its own experience or from other AI agents, and adapt its behaviour.

So should we be worried about this scenario? Has the AI horse already bolted? Or are we at an inflection point in the trajectory of AI?