OpenAI Takes Chat History Feature Offline to Rectify the Problem
ChatGPT, the popular AI language model created by OpenAI, appears to have suffered its first significant privacy incident. A bug in the service recently exposed the titles of other users’ chat histories, prompting the company to take the chat history feature offline while it works to rectify the problem.
According to a spokesperson from OpenAI who spoke to Bloomberg, the titles of other users’ conversations were visible in the user-history sidebar located on the left side of the ChatGPT webpage. However, the spokesperson clarified that only brief descriptions of the conversations were exposed and not the actual content of the chats.
Investigation and Bug Origin
Although OpenAI is still investigating the exact cause of the privacy issue, the company has confirmed that it was due to a bug in a piece of open-source software that it has not named. Screenshots of the bug were posted on social media platforms such as Reddit and Twitter, leading to concerns among users that either ChatGPT had been breached or their accounts had been hacked.
OpenAI temporarily suspended the entire ChatGPT service on Monday night, but it was restored late that same evening. However, user chat histories were not available upon its return, and as of now, they are still missing. OpenAI’s status page indicates that the company is working to restore the conversation history to affected users.
Advice to Users
OpenAI recommends that ChatGPT users avoid sharing sensitive information in their conversations, as this data could be used for training purposes. The recent privacy issue is a stark reminder of the warnings issued by the UK’s National Cyber Security Centre (NCSC) regarding generative AIs like ChatGPT. The NCSC expressed concern that sensitive user queries could be accessible to providers like OpenAI and used to improve future versions of the chatbot.
The NCSC’s warning about the accidental public disclosure of stored queries has proven to be accurate. However, in this case no personally identifiable information was exposed along with the conversation titles, which is a fortunate outcome.
Caution When Using ChatGPT
This incident illustrates the need for caution when using ChatGPT and other generative AIs, especially in situations where sensitive information is discussed. Concerned that sensitive information could be leaked, companies such as Amazon and JPMorgan have already cautioned their employees against using ChatGPT.
OpenAI has yet to say when the chat history feature will be restored. Nonetheless, the company’s swift action in taking the feature offline and investigating the issue is commendable. The incident serves as a reminder that all technology is fallible and that users should exercise caution when using AI language models like ChatGPT.