The Center for AI and Digital Policy has filed a formal complaint with the U.S. Federal Trade Commission (FTC), alleging that OpenAI, creator of the widely popular ChatGPT, has violated Section 5 of the FTC Act, which prohibits unfair and deceptive practices. Marc Rotenberg, the Center's founder and president, said the FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices, and that it should therefore investigate OpenAI and its GPT-4 model.
Last month, the FTC issued guidance directed at developers of artificial intelligence programs like OpenAI, advising them to avoid exaggerating their products' capabilities, making deceptive performance claims, or promising superiority over non-AI products without adequate proof. The agency also warned developers to assess potential risks and impacts before launch and to take responsibility for any errors or biases in their products.
Established in 2020 under the Michael Dukakis Institute for Leadership and Innovation, the Center for AI and Digital Policy is a Washington, DC-based non-profit. The FTC’s warning to developers of AI programs comes as emerging technologies like artificial intelligence and blockchain continue to gain popularity and become more mainstream.
The rapid rise of GPT-4, the latest model powering ChatGPT, to a dominant position in the industry has raised questions about OpenAI's practices. The Center for AI and Digital Policy is calling on the FTC to investigate OpenAI and determine whether the company has complied with the agency's rules.
The Center’s filing with the FTC comes days after several high-profile tech industry members, including Tesla and Twitter CEO Elon Musk, co-signed an open letter demanding a pause on developing AI systems like OpenAI’s GPT-4 platform. The letter called on all AI labs to pause for at least six months the training of AI systems more powerful than GPT-4, adding that if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
After +1000 tech workers urged pause in the training of the most powerful #AI systems, @UNESCO calls on countries to immediately implement its Recommendation on the Ethics of AI – the 1st global framework of this kind & adopted by 193 Member States💻https://t.co/BbA00ecihO
— Eliot Minchenberg (@E_Minchenberg) March 30, 2023
The Center for AI and Digital Policy is not alone in calling for investigations into AI’s rapid development and for rules to govern it. The United Nations Educational, Scientific and Cultural Organization (UNESCO) released its own statement calling for a “Global Ethical Framework” in response to the challenges posed by AI. In November 2021, the 193 Member States of UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence, establishing a global standard for AI ethics. The framework aims to safeguard and advance human rights and dignity while serving as an ethical guide and a foundation for promoting adherence to the rule of law in the digital realm.
.@UNESCO calls on countries to fully implement its Recommendation on the #Ethics of #AI immediately. This global normative framework, adopted unanimously by the 193 Member States of the Organization, provides all the necessary safeguards. #ChatGPT https://t.co/VZc6ueKLx9
— Audrey Azoulay (@AAzoulay) March 30, 2023
In conclusion, the Center for AI and Digital Policy’s complaint to the FTC against OpenAI highlights the importance of ethical standards in the development and deployment of AI systems. As AI gains prominence across industries, a comprehensive set of rules and guidelines is crucial to ensure the technology is used responsibly and transparently. UNESCO’s call for a “Global Ethical Framework” is a step in the right direction, and it is essential that governments, industry leaders, and regulatory bodies work together to establish shared ethical standards for AI.