Amazon Q is Amazon's new AI assistant, and it has allegedly been hallucinating and spewing out confidential information, making lawyers sad.

Amazon Q: An AI Assistant That Hallucinates and Causes Legal Nightmares. Nothing Unusual Then.


This article is for general information purposes only and isn’t intended to be financial product advice. You should always obtain your own independent advice before making any financial decisions. The Chainsaw and its contributors aren’t liable for any decisions based on this content.



Imagine having a chatbot that can answer any question you have about Amazon Web Services (AWS). AWS is the cloud computing platform that powers many of the world’s websites and apps. You can play with the free preview here. Sounds useful, right? Well, that’s what Amazon claims to offer with Q, an AI-powered chatbot that it launched in public preview last month. But ah, Houston, we have a problem with Amazon Q.

Q is supposed to help AWS customers with tasks. All you have to do is ask Q a question or give it a command, and it will hopefully respond with a list of possible solutions, along with reasons and citations.

Sounds impressive, right? Ah. Apparently not. 

According to internal documents obtained by Platformer, Q is not as smart as it seems. In fact, Q is experiencing “severe hallucinations” and leaking “confidential data” that could put Amazon and its customers at risk.

Reddit threads have sprung up in relation to the hallucinations. Of course, they provide great reading. 

What’s wrong with Amazon Q?

The documents claim that Q is disclosing sensitive information. Q is also allegedly making up facts, such as claiming that AWS has partnerships with companies it doesn't actually work with. Yikes!

These errors are not just random glitches, but signs of a deeper problem with Q’s AI, the documents claim. Q is trained on 17 years’ worth of AWS knowledge, but it does not seem to understand the context or the meaning of what it reads in some cases.

Q also does not have a clear sense of what it can and cannot share with the user, according to the Platformer article, oversharing some confidential info and hiding other info that customers need.

Amazon Q: Accused of hallucinating and revealing deep dark secrets.

Why does it matter?

Q’s hallucinations and data leaks are not just embarrassing for Amazon, but also dangerous for its customers. Q could expose AWS customers to security breaches, legal issues, or competitive disadvantages. 

It’s a legal nightmare. 

Amazon is aware of Q's problems and is working to fix them. According to Platformer, the documents show that Amazon has a team of engineers and analysts who monitor Q's performance and flag any errors or issues. 

We can assume that, as Q is still in public preview, Amazon will continue to improve Q's accuracy and reliability before launching it officially.

As with all AIs, Q users need to be careful not to treat the assistant's output as gospel. It's still early days for AI, and this is an example of a potential legal nightmare heading Amazon's way.