Canada
A chatbot has no intentions or independent will. A chatbot is a tool of the company that the company uses to make representations, just as the company uses any other part of its website.
A company should be careful about the representations it allows to be communicated to customers, including via a chatbot.
See e.g. Moffatt v. Air Canada, 2024 BCCRT 149. Air Canada represented to a customer, via a chatbot, that the customer could apply for a bereavement fare retroactively. Air Canada attempted to renege on this representation. The British Columbia Civil Resolution Tribunal found that to be negligent misrepresentation and required Air Canada to pay the customer the difference between what the customer actually paid and what a bereavement fare would have been.
If you're asking specifically whether the mere representation that the chatbot is a human can be the basis for a claim, that is a narrower question.
If a company represents to a customer that the interaction they're having with the company is with a live human being, when in fact the interaction is with a chatbot, that could lead to:
- a claim in negligent misrepresentation, but only if the customer's reasonable reliance on the representation of human interaction led to damages;
- a claim for fraud if the misrepresentation was intentional by the company, and if the misrepresentation was material and caused damages;
- a claim of false advertising under the Competition Act if the representation is "misleading in a material respect" — to be "material," the misrepresentation needs to have been capable of influencing a consumer to buy the product (for what it is worth, I have never seen a claim of false advertising based on a misrepresentation about the medium through which the company is talking to the customer about the product).