OpenAI published an article on Wednesday (5) making claims about the safety of ChatGPT. Beyond comments focused on the future of the technology, the company offered explanations to the public regarding content harmful to children and users' privacy. It also reported improvements in the factual accuracy of GPT-4, the model behind the latest version of the chatbot.
According to OpenAI, tools like ChatGPT come with real risks. However, the company said it is dedicated to ensuring that security is built into the system at all levels.
The company also said that before releasing anything new, it conducts rigorous testing and works to "improve the model's behavior with techniques such as reinforcement learning with human feedback." However, OpenAI acknowledged that despite extensive testing and research, it cannot foresee all the ways people might abuse the technology.
As such, real-world use has led OpenAI to develop increasingly nuanced policies against behavior that poses a genuine risk to people, while still preserving the technology's benefits for users.
General privacy and child safety
The safety of children and the privacy of users, the reasons behind the banning of ChatGPT in Italy, are important issues, according to OpenAI.
The company said it does not sell user data, but uses it to make its models more useful. For example, ChatGPT improves from the conversations individuals have with it.
Even though some of the material that feeds the chatbot comes directly from the internet, the company says it wants its models to learn about the world, not about individuals' private lives.
To that end, the firm works to remove personal information from its training datasets where feasible, fine-tunes models to reject requests for such information, and responds to individuals' requests to delete their personal information from its systems.
Another important point is the protection of children. In addition to requiring users to be at least 18 years old (or 13 to 17 with parental approval), OpenAI goes further:
We have made a significant effort to minimize the potential of our systems to generate content that is harmful to children. For example, when users attempt to upload Child Sexual Abuse Material (CSAM) to our image tools, we block it and report it to the National Center for Missing and Exploited Children.
GPT-4 is more accurate than the previous version
Still on the subject of safety, the company highlighted the progress made with the GPT-4 model. According to OpenAI, GPT-4 is 82% less likely to respond to requests for disallowed content than GPT-3.5.
The company also highlighted gains in factual accuracy. Using user feedback on information flagged as incorrect in the chatbot as a primary data source, the developers were able to improve GPT-4's factual accuracy: according to OpenAI, it is 40% more likely to produce factual content than its predecessor.
Source: OpenAI.