Artificial intelligence is already part of daily life at countless technology companies. Some have embraced the tools wholeheartedly, while others keep a more suspicious eye on them. Such is the case with Apple, which has reportedly banned its employees from using ChatGPT and other AI tools. The company is said to fear that frequent use of the OpenAI chatbot could lead to leaks of its projects.
According to a report by The New York Times, Apple decided to follow the same path as another company: Samsung banned employees from using ChatGPT at work after some of them leaked sensitive data while asking the chatbot for help.
The American newspaper reported that the iPhone maker is working on its own generative artificial intelligence, along the lines of Google’s Bard and Microsoft’s Bing Chat. The company is therefore said to be concerned that employees could inadvertently expose important and confidential information about future projects. It is worth remembering that the chatbot recently gained an app for the iPhone.
In addition, Apple employees have reportedly also been advised not to use GitHub’s Copilot, as it is owned by Microsoft and relies on OpenAI’s models for its functionality.
Apparently, “precaution” is the buzzword among large companies. Apple and Samsung are not alone in restricting employee use of ChatGPT and similar tools: brands like Bank of America and JP Morgan have taken the same steps.
ChatGPT is convenient but still lacks security
Even though it is genuinely useful in users’ daily lives, the OpenAI chatbot is not free from criticism and concern. The bans by brands like Apple and Samsung are understandable, as data leaks involving the chatbot have occurred before.
OpenAI itself has disclosed security details about ChatGPT, highlighting its efforts to improve privacy while seeking to reassure its user base. In the report, the company emphasized that it does not sell user data, but does use it to make its artificial intelligence models more useful.
Still on the subject of security, the digital protection company Kaspersky ran tests on ChatGPT’s ability to detect phishing. According to its results, the AI was able to identify attack targets, but it produced false positive rates of up to 64%, a figure experts considered high.
As Brazil is among the countries that use OpenAI’s artificial intelligence the most, it is worth taking every precaution to protect yourself from data leaks and cyber scams.
With information from: Mashable.