
There have been cases in which a chatbot "lies" and claims to be a human representative of a company; indeed, there is at least one company that is intentionally making chatbots (or, perhaps, simply adapting those made by the major players) to lie about their identity.

One might imagine that if a corporation used a chatbot that lied about its identity, that might run afoul of laws around false advertising or fraud, since false information is being given in the context of commercial transactions. Is there any civil or criminal liability associated with a company using a chatbot, or with a company creating a chatbot? If so, does this liability depend on intentionality (e.g. whether the chatbot was intentionally instructed to lie)?

Answers from all jurisdictions are welcome.


4 Answers


Deceptive or misleading conduct in trade or commerce is an offence

Knowledge that the information is wrong is not an element of the offence. So, if the chatbot makes statements about itself, the organisation, its products, etc. that are deceptive or misleading, and the organisation is engaged in trade or commerce when that happens, then the organisation has committed the offence.

  • I agree. I get many phone calls from chatbots from spoofed numbers that ... oh there's one now ... numbers that belong to other people. They are only trying to harvest information about me. The "organization" is a boiler room of scammers who will turn my information over to other criminals. This is an offense, but little is done about it. It should be a felony, as well as an annoyance. Commented Nov 25 at 18:51
  • You cannot sue a tool; you can only sue a legal entity (a company) or a human. Deception and misleading conduct require either proof of intent or reasonable negligence, but you cannot inherently use a tool malfunction as evidence of intent/negligence on the company's behalf. Failing to remedy the issue when pointed out, however, can be construed as an offence, but that's not the question being asked here. Commented yesterday
  • @Flater - Please do feel free to write an answer explaining why this one is wrong. Do note that "failing to remedy the issue when pointed out" is an element of the question (as implicit in both examples), and that the question also asks about intentionality (chatbots specifically created to lie about their identity and used for that purpose) and not simply about "malfunctions," as mentioned in one of the examples. Commented yesterday
  • Perhaps. When you write your answer about how either intent or negligence is required, you might consider also addressing the Australian government's assertion, linked in this answer, that "It makes no difference whether a business intends to mislead or not." Commented yesterday
  • @Flater intent is not an element of the offense. If the statements (or silence) are deceptive or misleading, the offense has been committed, even if the organization did not intend it. Note that everything you say can be 100% true and still be deceptive and misleading. Or 100% false and not be. Commented yesterday

A chatbot has no intentions or independent will. A chatbot is a tool of the company that the company uses to make representations, just as the company uses any other part of its website.

A company should be careful about the representations it allows to be communicated to customers, including via a chatbot.

See e.g. Moffatt v. Air Canada, 2024 BCCRT 149. Air Canada represented to a customer, via a chatbot, that the customer could apply for a bereavement fare retroactively. Air Canada attempted to renege on this representation. The British Columbia Civil Resolution Tribunal found that to be negligent misrepresentation and required Air Canada to repay the customer the difference between what the customer actually paid and what a bereavement fare would have been.

If you're asking specifically whether the mere representation that the chatbot is a human can be the basis for a claim, that is a narrower question.

If a company represents to a customer that the interaction they're having with the company is with a live human being, when in fact, the interaction is with a chatbot, that could lead to:

  • a claim in negligent misrepresentation, but only if the customer's reasonable reliance on the representation of human interaction led to damages;
  • a claim for fraud if the misrepresentation was intentional by the company, and if the misrepresentation was material and caused damages;
  • a claim of false advertising under the Competition Act if the representation is "misleading in a material respect" — to be "material" the misrepresentation needs to have been susceptible of influencing a consumer to buy the product (for what it is worth, I have never seen a claim of false advertising based on a misrepresentation about the medium through which the company is talking to the customer about the product).

If the chatbot gives false information about the product, the company, or the consumer's rights, the company may be liable for false advertising in the same way it would if it misrepresented its products/itself/the consumer's rights in human-made communications.

Assuming the only false information the chatbot gives is the lie about being a human, then as long as all other information about the product and the company behind it is true, this in itself doesn't create liability under Articles L121-1 to L121-15-3 of the Code de la Consommation.

Note that article L121-1-2°-f specifies that in advertising, the "identity, qualities, aptitudes and rights of the professional" must be clearly identifiable. If the chatbot is the thing rendering the service (say, it pretends to be a human giving you tech support), then the company is probably liable for false advertising, because the identity and qualities of the professional are misrepresented.

  • Of course, a chatbot that pretends to be human can't give an "I'm a bot and might lie." disclaimer to mitigate the risk of lawsuits for false advertisement. Commented Nov 24 at 14:08
  • @Brian it's unclear to me whether a "chatbots can be inaccurate" disclaimer would be sufficient to protect the company from liability, for the same reason humans can't make an ad that disparages the competition and then add "*some information could be incorrect". The fact that a chatbot is sometimes inaccurate is irrelevant; if the company produced false advertising then it is liable for it. Commented Nov 24 at 14:40
  • Humans also give inaccurate information sometimes. We deal with that if it causes damages; we don't usually punish it just for being inaccurate per se. @Brian Commented Nov 24 at 15:46

This might fall under the EU AI Act's prohibited AI practices (Article 5), depending on the effect the practice has on the customer:

  1. The following AI practices shall be prohibited:

(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
