Ethics Alert Series

Artificial Intelligence (AI) and Marketing Ethics

By Senny Boone


Artificial intelligence refers to the ability of machines to learn and make decisions based on data and analytics in real time. With the advent of AI and synthetic media across the marketing ecosystem, ethics may not always be factored in from the start; treating it as an afterthought creates risk for brands. New technological innovations like chatbots and connected devices bring immense processing power, speed, and efficiency, and with them great responsibility. It is an exciting time for marketers, but any ethical considerations must be weighed before these new techniques are used in a particular marketing or advertising scenario. Consequently, the need for human review of AI in marketing has never been greater, as innovation propels ever-larger output and scale.

Every aspect of business will use some form of AI in its work going forward. For marketers, chatbots aid customer service, digital marketers can predict behavior for ad targeting, and sentiment analysis can gauge reactions to a brand. Use is increasing: in its report The State of AI in 2021, McKinsey found that 56 percent of respondents had adopted AI.
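To make "sentiment analysis" concrete, here is a minimal, illustrative sketch of the simplest lexicon-based approach: count positive and negative words in a brand mention and average them into a score. The word lists and scoring rule are assumptions for illustration only; production systems use trained models, which is precisely where the bias concerns discussed below arise.

```python
# Minimal lexicon-based sentiment scoring (illustrative only).
# Word lists are hypothetical; real systems use trained models.
POSITIVE = {"love", "great", "excellent", "fast", "helpful"}
NEGATIVE = {"hate", "slow", "broken", "terrible", "rude"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive words add, negative words subtract."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this brand, support was fast and helpful"))  # 1.0
print(sentiment_score("Terrible service, the app is broken"))              # -1.0
```

Even a toy like this shows why human oversight matters: the score is only as good as the word lists behind it, and a skewed lexicon or training set skews every judgment made downstream.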

Below are recent examples of ethical challenges facing AI uses:

Mitigation of Bias: Financial incentives are pushing rapid deployment of AI without adequate review. For example, ChatGPT has been used by millions since its release in November 2022 and has seen widespread adoption. When tested, however, it demonstrated bias, gender stereotyping, and factual inaccuracies that require human oversight to correct.

Synthetic Images: Beyond concerns over search results and generated language, synthetic images present their own ethical challenges. A key question is whether AI-based generative models sidestep copyright protection and accountability. Creative artwork may be used in campaigns without adequate attribution. Moreover, because such artwork can be generated in seconds, human artists are placed in the problematic position of competing against machine-generated art and imagery.

Health Information: Health information and health care pose another area of risk when AI is used without informed consent and review. As a large language model chatbot, ChatGPT can offer inexpert diagnoses of health conditions and treatment plans. Koko, a free peer-to-peer therapy platform, used the same GPT-3 large language model that powers ChatGPT to generate therapeutic comments for users experiencing psychological distress. Users who wished to send supportive comments to other users had the option of sending AI-generated comments rather than formulating their own messages. Koko's co-founder Rob Morris reported: "Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own," and "Response times went down 50 percent, to well under a minute." However, the experiment was quickly discontinued because "once people learned the messages were co-created by a machine, it didn't work." Koko has made ambiguous and conflicting statements about whether users understood that they were receiving AI-generated therapeutic messages.

Chat Feature and Lawsuits:

Foot Locker is the target of a new privacy-related lawsuit regarding its chat feature, a concern for many retailers.

Chatbots are featured on many websites, where a pop-up invites visitors to speak with a robot or a human employee. The feature has become nearly ubiquitous on e-commerce sites for marketing, sales, and customer service purposes. However, brands and service providers with this capability should be aware of potential legal issues and how to avoid them.

The lawsuit against Foot Locker alleges that because the company records chat conversations, archives them, and shares them with analytics partners to gather insights, it is illegally wiretapping. The suit could cost Foot Locker up to $25 million.

Guidelines and Principles:

As principles continue to be developed, it is useful for marketers to "go back to basics" and ensure awareness of basic consumer expectations and regulatory requirements.

Industry Guidelines for Ethical Business Practices (provided by the Center for Ethical Marketing)

Principles: An ethical and accountable marketer will review these principles as they relate to the use of AI:

  1. Is committed to customer satisfaction, good corporate citizenship, and responsible environmental, community, and financial stewardship. For example, what impact will the use of AI have on the company's community including its employees? Does it enhance these goals for your organization?
  2. Clearly, honestly, and accurately represents its products, services, and terms and conditions. AI can be beneficial to provide accuracy and clarity, but if terms and conditions change, will there be sufficient planning and updating?
  3. Delivers its products and services as represented. Will AI enhance product delivery?
  4. Communicates in a respectful and courteous manner. Will use of AI increase respect toward others and not create stereotypes? Will the use of chatbots increase or decrease customer relationship building?
  5. Responds to inquiries and complaints in a constructive, timely way. AI can greatly benefit customer service to provide timely responses so long as the response solves the issues of concern.
  6. Maintains appropriate security policies and practices to safeguard data. AI data usage must be factored into any data security planning to protect data based on current rules and regulations. (Marketing Principles 1-6, Guidelines for Ethical Business Practices)

Consent: Consent means an individual's action in response to a clear, meaningful, and prominent notice regarding the collection and use of data. Will the use of automated responses and real-time consent help or undermine a user's informed and knowing agreement? (Guidelines for Ethical Business Practices, Definitions, p. 2)
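One way a development team might operationalize this definition, and avoid the recording issues raised in the Foot Locker suit, is to gate chat-transcript retention on an explicit, logged consent action. The sketch below is a hypothetical illustration; the class and method names are not from any specific platform or regulation.

```python
# Hypothetical sketch: retain chat transcripts only after explicit user consent.
# Names are illustrative, not taken from any real chat platform's API.
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    user_id: str
    consented: bool = False          # set only by an explicit user action
    transcript: list = field(default_factory=list)

    def record_consent(self, accepted: bool) -> None:
        """Store the user's response to a clear, prominent notice."""
        self.consented = accepted

    def log_message(self, message: str) -> None:
        """Retain the message only if the user opted in; otherwise discard it."""
        if self.consented:
            self.transcript.append(message)

session = ChatSession(user_id="u123")
session.log_message("hello")          # discarded: no consent yet
session.record_consent(True)
session.log_message("order status?")  # retained
```

The design choice here is that recording is off by default and nothing is archived or shared until the user's affirmative response to a notice is captured, matching the "clear, meaningful, and prominent notice" standard above.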


Company Examples of AI Disclosure and Use:

Leading innovators in the space have issued useful principles to provide guideposts as they develop AI. Here are two examples from Google and IBM:

Google states the following with regards to prohibited AI uses:

AI applications we will not pursue:

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.

See Google's Ethical Principles for AI.



For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithm's recommendations. If we are to use AI to help make important decisions, it must be explainable.

IBM will make clear:

  • When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
  • The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
  • That while bias can never be fully eliminated, and our work to eliminate it will never be complete, we and all companies advancing AI have an obligation to address it proactively. We therefore continually test our systems and find new data sets to better align their output with human values and expectations.
  • The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.
  • Our firm support for transparency and data governance policies that will ensure people understand how an AI system came to a conclusion or recommendation.

Download IBM's Policy on AI and Ethics

Federal Trade Commission:

The FTC is warning marketers and organizations to avoid making unsupported claims about AI and overpromising what it can do.

ANA Resources:

If you have questions or want to get more involved in marketing and ethics, please contact the ANA Center for Ethical Marketing. We look forward to collaborating with you in our shared efforts to ensure good business practices, consumer protection in the marketplace, and consumer trust through accountability.


"ETHICS ALERT: Artificial Intelligence (AI) and Marketing Ethics." Senny Boone, ANA SVP, ANA Center for Ethical Marketing, 3/16/22.