AI, Automation, Targeting, and Data: Risk Assessment and Legal and Ethical Considerations


New state privacy laws require data controllers to conduct and document assessments of high-risk data processing, a category that specifically includes many marketing use cases: use of cookies, combining personal data from multiple sources, selling/sharing personal data, processing sensitive information, profiling, and automated decision making.

In addition, risk and impact assessments of the use of artificial intelligence will address not only obligations under these laws but also other legal and reputational concerns. The panel looked at how to operationalize assessments as part of a data governance program and the benefits of doing so, particularly in the context of marketing practices.

Key Takeaways

New data protection assessment requirements include:

California / EDPB: The rules are still TBD, though they will likely be based on a combination of the European Data Protection Board's (EDPB) guidelines and Colorado's requirements. At minimum, the EDPB guidelines require a description of the processing activity and the personal data involved, the context and purpose of the processing, a risk-benefit analysis with measures to address identified risks, and the involvement of all interested parties.

Virginia and Connecticut: The controller must analyze the risks and benefits of the processing activity to consumers and other interested parties. The analysis should factor in the use of deidentified data, consumers' reasonable expectations, and the context of the processing activity. Keep assessments for a reasonable period, as the state attorney general can request to review them.

Colorado: Conduct a risk-benefit analysis and enumerate the safeguards and measures taken to offset the risks identified. This requires input from several internal and external stakeholders. Review and update assessments as often as appropriate to address risks, considering the type, amount, and sensitivity of the data processed. If the processing activity involves profiling, the assessment must be updated at least annually. Retain assessments for a minimum of three years. (A minimal sketch of how these cadences might be tracked follows.)
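
These review and retention cadences differ by state, so a data governance program needs to track them for each assessment. Below is a minimal sketch in Python, with hypothetical names, of how a single assessment record might encode the Colorado rules described above; applying a three-year floor to Virginia and Connecticut's "reasonable period" is an assumption, not a legal requirement.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Retention minimums in years, per the panel's summary. Colorado requires
    # three years; Virginia and Connecticut require a "reasonable period,"
    # modeled here (as an assumption) with the same three-year floor.
    RETENTION_YEARS = {"CO": 3, "VA": 3, "CT": 3}

    @dataclass
    class DataProtectionAssessment:
        activity: str          # e.g., "profiling for targeted advertising"
        jurisdiction: str      # two-letter state code, e.g., "CO"
        involves_profiling: bool
        last_reviewed_on: date

        def review_due(self) -> date | None:
            # Colorado: assessments covering profiling must be updated
            # at least annually.
            if self.jurisdiction == "CO" and self.involves_profiling:
                return self.last_reviewed_on + timedelta(days=365)
            # Otherwise, review "as often as appropriate" -- a judgment
            # call, so no fixed date is computed here.
            return None

        def retain_until(self) -> date:
            years = RETENTION_YEARS.get(self.jurisdiction, 3)
            return self.last_reviewed_on + timedelta(days=365 * years)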

Sensitive Data Uses and Targets

When it comes to sensitive personal information, such as health data, data protection assessments are required in California, Colorado, Connecticut, and Virginia. The FTC Health Breach Notification Rule also imposes restrictions on secondary uses of consumer health information by digital health apps and other non-HIPAA-covered businesses handling health data.

When it comes to children's personal information, the California Age-Appropriate Design Code Act takes effect July 1, 2024. It requires a data protection assessment that identifies the purpose of the online service; how the service uses children's personal information; the risks of material detriment to children; and the strategy and measures to address those risks.

The assessment must address whether algorithms and targeted advertising systems could harm children, as well as practices involving the collection or processing of children's sensitive personal information. Assessments must be maintained for as long as the service is likely to be accessed by children, reviewed biennially, and provided to the California Attorney General within five business days of receipt of a written request.
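
The five-business-day production deadline is a concrete computation. Here is a minimal sketch, assuming weekends are the only non-business days (court and state holidays are ignored):

    from datetime import date, timedelta

    def add_business_days(start: date, days: int) -> date:
        """Count forward `days` weekdays, skipping Saturdays and Sundays."""
        current, remaining = start, days
        while remaining > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Mon=0 ... Fri=4
                remaining -= 1
        return current

    # e.g., a written request received on a Wednesday:
    received = date(2023, 7, 19)
    production_deadline = add_business_days(received, 5)  # -> 2023-07-26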

When it comes to targeted ads, consumers in California, Colorado, Connecticut, and Virginia have the right to opt out of targeted advertising. Consumers also have a right to opt out of certain other processing activities, including profiling. Depending on the jurisdiction, the opt-out right may be limited to decisions producing legal or similarly significant effects (LSSE). In Colorado, a controller must obtain affirmative consent before profiling a consumer who has previously opted out.
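
A consent-management system has to encode these jurisdictional differences somewhere. The following compact encoding is illustrative only; the state sets and the LSSE limitation are assumptions for the sketch, not legal conclusions.

    # Illustrative only -- not legal advice. State scopes are assumptions.
    TARGETED_AD_OPT_OUT_STATES = {"CA", "CO", "CT", "VA"}
    PROFILING_LSSE_ONLY_STATES = {"CO", "CT", "VA"}  # assumed LSSE limit

    def must_honor_opt_out(state: str, activity: str,
                           produces_lsse: bool = False) -> bool:
        """True if an opt-out request must be honored under this sketch."""
        state = state.upper()
        if activity == "targeted_advertising":
            return state in TARGETED_AD_OPT_OUT_STATES
        if activity == "profiling":
            # Some jurisdictions limit the profiling opt-out to decisions
            # producing legal or similarly significant effects (LSSE).
            if state in PROFILING_LSSE_ONLY_STATES:
                return produces_lsse
            return state in TARGETED_AD_OPT_OUT_STATES
        return False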

AI risks include:

  • Misinformation, disinformation, and false news
  • Programmatic advertising on unreliable AI-generated news and information websites (UAINs)
  • Outdated training data
  • No consent for images used as training data
  • Lack of transparency to customers, consumers
  • Legal risks
  • Security and intellectual property (IP)
  • Confidential information leaks
  • Plagiarism and infringement with AI-generated content mimicking competitors or other recognizable brands
  • Bias reflecting gender and racial stereotypes

FTC consumer protection concerns with AI include:

  • False Advertising: Don't make unsubstantiated claims about AI.
  • Synthetic Media: Watch out for scammers using deepfakes, voice cloning and AI-generated content to deceive consumers.
  • Discrimination: Beware of unrepresentative training data and biased decision-making.

A copyright law example involving AI:

The case Getty Images v. Stability AI, filed in February 2023, asserts that Stability AI, through its Stable Diffusion model, used Getty Images content for training purposes and infringed Getty's copyrights. Stability AI allegedly copied some 12 million copyrighted images, along with their captions and metadata.

In its generated images, Stable Diffusion even reproduced a distorted "Getty Images" watermark. According to Getty Images, Stable Diffusion can generate artificial images because it was trained on proprietary content belonging to Getty Images and others, and it produces images that are highly similar to, and derivative of, that proprietary content. The plaintiff requests an injunction, destruction of the Stable Diffusion versions trained on Getty Images content, actual damages and profits, and statutory damages.

Key Takeaways

  • Establish an assessment policy and procedure.
  • Learn how the AI system works.
  • Provide transparency about how user data is used and whether the AI system is involved in active litigation and why.
  • Know specific use cases, inputs, and expected outputs of the AI system.
  • Determine which laws apply.
  • Document the risk management process for each use case (see the register sketch after this list). Stay current on legal, technological, and reputational developments.
  • Managing risk in an uncertain regulatory and technological environment requires top-down and bottom-up compliance.
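
As one way to act on the documentation takeaway above, here is a minimal sketch of a per-use-case risk register entry; all field names are hypothetical.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class UseCaseRiskRecord:
        """One documented entry in an AI risk register (illustrative)."""
        use_case: str                 # e.g., "lookalike audience modeling"
        inputs: list[str]             # data fed into the system
        expected_outputs: list[str]   # what the system produces
        applicable_laws: list[str]    # e.g., ["CPA", "VCDPA"]
        identified_risks: list[str]
        mitigations: list[str]
        last_updated: date = field(default_factory=date.today)

    register = [
        UseCaseRiskRecord(
            use_case="generative copy for ad creative",
            inputs=["brand style guide", "campaign brief"],
            expected_outputs=["draft ad copy"],
            applicable_laws=["FTC Act Section 5"],
            identified_risks=["IP infringement", "unsubstantiated claims"],
            mitigations=["human review", "claims substantiation"],
        )
    ]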

Source

"AI, Automated Decision Making, and Profiling, Targeted Advertising, and Sensitive Data: Risk Assessment and Legal and Ethical Considerations, Currently and on the Horizon." Julia Jacobson, partner at Squire Patton Boggs; Alan Friel, partner at Squire Patton Boggs; Gicel Tomimbang, associate at Squire Patton Boggs; Charlotte Murphy, senior counsel of global marketing at The Coca-Cola Company. ANA Law 1-Day Conference, 7/19/23.
