The Federal Trade Commission (FTC) is increasingly focusing on the potential and real-world harms associated with artificial intelligence (AI). The agency is particularly concerned with issues ranging from commercial surveillance to fraud, impersonation, and illegal discrimination. Consumers are interacting with AI in various ways, including customer service chatbots, educational tools, social media recommendation systems, facial recognition technology, and decision-making tools for health care, housing, employment, and finance.
The FTC emphasizes that companies deploying AI systems must comply with existing laws related to competition and consumer protection. "Firms deploying these AI systems and tools have an obligation to abide by existing laws," according to the FTC. The agency can scrutinize whether these tools violate privacy or are susceptible to adversarial inputs that put personal data at risk. The FTC also examines generative AI tools used for fraud or manipulation and assesses algorithmic products that make decisions in high-risk contexts such as health and finance.
Recent casework underscores the need for companies to prevent harm both before and after deploying AI products. For example, the FTC alleged that Rite Aid failed to take reasonable measures in its use of facial recognition technology (FRT), which falsely identified consumers as shoplifters. The complaint further alleged that, after deployment, the company failed to take "reasonable steps" to "regularly monitor or test the accuracy of the technology."
The FTC has also taken steps against AI-related impersonation and fraud. The agency finalized a new rule to combat impersonation, including impersonation carried out with AI-generated deepfakes, and launched a Voice Cloning Challenge to help protect consumers from the misuse of voice cloning software.
Deceptive claims about AI tools have been another focus for the FTC. Cases have been brought against companies like Evolv Technologies for misleading claims about security screening products and Intellivision Technologies for deceptive facial recognition software claims.
Ensuring privacy and security by default is critical when dealing with generative AI tools, which require vast data inputs. The FTC has previously issued a complaint against Amazon over the data retention practices of its Alexa service.
These cases illustrate how the FTC's enforcement efforts encompass both the claims that companies make about their own products and the steps they take to ensure those products do not harm consumers or violate the law.
The FTC's work in this area is ongoing, including attention to the AI competition concerns highlighted by Chair Khan: "AI will continue to evolve in ways that will reveal both the benefits and risks of technology."