Suresh Venkatasubramanian, Brown University computer scientist and former assistant director for science and justice at the White House Office of Science and Technology Policy

Venkatasubramanian: 'It’s the fear of these systems and our lack of understanding of them that is making everyone have a collective freak-out'


Earlier this month, the Federal Trade Commission issued an extensive Civil Investigative Demand to OpenAI, the maker of ChatGPT. The investigation will explore whether the artificial intelligence company violated consumer protection laws, according to a July 13 AP News article.

“It’s the fear of these systems and our lack of understanding of them that is making everyone have a collective freak-out,” Suresh Venkatasubramanian, a Brown University computer scientist and former assistant director for science and justice at the White House Office of Science and Technology Policy, said, according to AP News. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”

The request comes after FTC Chair Lina Khan affirmed that the agency will enforce existing consumer protection laws to combat the potential dangers of artificial intelligence, AP News reported.

The investigation will determine whether OpenAI has engaged in unfair or deceptive privacy or data security practices, or in deceptive practices that could harm consumers through its use of a large language model, in violation of Section 5 of the FTC Act, the article said.

The review will also include OpenAI's model development and training, risk assessment and technical details, such as API integration. Large language models are AI algorithms that use large datasets to generate new content, according to AP News.

“The FTC requires that OpenAI describe in detail the data used to train or develop each Large Language Model product it has made available – such as ChatGPT – along with how it obtained that data,” the AP News report said. “The FTC goes on to inquire into how Large Language Models were trained, including the training process, the individuals involved, the role of human feedback and oversight.”

If the commission finds that OpenAI violated consumer protection laws, it may require changes to the way AI products are developed in the future, AP News reported.

“Showcasing the Commission’s continued interest in privacy, security and consumer protection, the FTC asks OpenAI to detail the risks that it identified while training products like ChatGPT,” the article stated. “As part of this, the FTC inquires into a known OpenAI data security incident along with the potential for future incidents.”

In connection with the investigation, OpenAI has submitted a list of voluntary AI commitments, made in collaboration with Amazon, Anthropic, Google, Inflection, Meta and Microsoft. The commitments include sharing safety risk information with other entities, investing in cybersecurity and publicly reporting model or system capabilities, limitations and domains of use, among other efforts, AP News said.
