OpenAI has announced new features designed to improve privacy for users of its ChatGPT platform. Responding to concerns about user data being stored and used to train future AI models, the company now lets users delete chat history and control how their conversation data is used.
ChatGPT users can now turn off chat history, allowing you to choose which conversations can be used to train our models: https://t.co/0Qi5xV7tLi
— OpenAI (@OpenAI) April 25, 2023
With the new privacy features, users can choose to exclude their prompts from OpenAI’s model training and remove them from the history sidebar, giving them more control over their digital footprint. OpenAI stated in a recent blog post that disabling chat history would result in a 30-day retention period for new conversations, during which they would be reviewed only to monitor for abuse before being permanently deleted.
In the wake of a March incident in which a bug exposed some users’ personal information and chat history, OpenAI is not only enhancing privacy for casual users but also developing a ChatGPT Business subscription tailored to professionals and enterprises that need additional data management options. The upcoming ChatGPT Business offering will adhere to the company’s API data usage policies, ensuring that end-user data is not used to train AI models by default.
More privacy and customization over your ChatGPT experience & we'll roll out ChatGPT Business in the coming months https://t.co/5ix8pF0eht
— Mira Murati (@miramurati) April 25, 2023
For users who prefer to keep a record of their conversations, a new “export” feature will let them receive an email containing their ChatGPT data, including questions, conversations, and related information. If chat history is not disabled, OpenAI will retain these conversations indefinitely for research purposes.
These changes follow OpenAI’s efforts to address user privacy and AI “hallucinations” since the release of the GPT-4 update. In April, the company reiterated its dedication to ensuring the safety and utility of AI technology and subsequently launched a bug bounty program.
The increased focus on user privacy comes as regulators worldwide grapple with the challenges posed by rapidly advancing AI tools. Earlier this month, Italy banned ChatGPT over privacy concerns, and other governments have also expressed apprehension.