Samsung doesn’t have a good track record of keeping news about its smartphones under wraps: there is always a leaker, or even the website of an international subsidiary, revealing something ahead of time. This time, the South Korean company faced three “leaks” more serious than a handset specification: employees used ChatGPT to handle sensitive information.

Samsung urged employees to be careful when using the popular artificial intelligence tool to help with work, whether for reviewing code or summarizing a meeting. These two use cases led to three instances of confidential company data being shared. The incidents happened within 20 days of ChatGPT being cleared for use at the company, roughly one per week.
Samsung wants employees to use ChatGPT responsibly
It’s not that Samsung wants its employees to stop using AI; the company wants them to use it more responsibly, so that ChatGPT improves work performance without being asked to review confidential code or the minutes of confidential meetings.
The South Korean company allowed the use of ChatGPT in its semiconductors and devices division, which is responsible for manufacturing chips. Other Samsung divisions still prohibit access to the AI. The case, however, was reported by a Korean newspaper.

The translation of the Korean report does not make clear whether employees used the “standard” ChatGPT, accessed through OpenAI’s website, or some API. Either way, the prompts entered are sent to a foreign company unrelated to Samsung. Recently, a ChatGPT bug allowed users to view other people’s conversations.
While Samsung reprimands employees and promises “punishments” for those who fail to comply with the guidance, the manufacturer is also starting to develop its own generative AI. In the near future, company employees should be able to use a “SamsungGPT” to compile data and perform code reviews, so confidential information stays in-house.
Samsung data “leaks” on ChatGPT

The Economist, a South Korean business newspaper, reported the three “leaks” on Monday (the 3rd). In one case, an employee pasted the source code of a factory measurement database program into ChatGPT to look for errors.
The other incident involving code was more serious: an employee asked the AI to optimize a program that identifies which factories are efficient and which are underperforming.
In the third case, the last straw for Samsung, an employee used ChatGPT to produce the minutes of a meeting.
In its guidelines, OpenAI asks users not to disclose sensitive or personal data in their conversations with the artificial intelligence.
With information from: TechSpot and The Economist