Kaspersky, the cybersecurity company, announced that it has run tests to assess ChatGPT’s ability to detect phishing links. According to its experts, OpenAI’s artificial intelligence was often able to guess the target of an attack, but it also produced false positive rates of up to 64%, a figure the security company considered quite high.

Since its release, ChatGPT has been used to write malware and even to draft a bill. Kaspersky, however, decided to run experiments to see whether the chatbot can detect the fake links used in online scams. Spoiler: the tool didn’t do as well as we’d hoped.
According to the company’s specialists, the program’s effectiveness in this area is still limited. The tests were run with GPT-3.5 Turbo on more than 2,000 links that security professionals had classified as fraudulent. The AI delivered a mix of successes and failures, making it clear that it still needs more training.
The experiment had two main questions: “Does this link lead to a phishing site?” and “Is it safe to access this link?”.
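To make the setup concrete, here is a minimal sketch of how such a test could be scripted against the gpt-3.5-turbo API. The prompt wording, the example URL and the `ask_about_url` helper are illustrative assumptions, not Kaspersky’s actual test harness.

```python
# Minimal sketch: putting the two experiment questions to gpt-3.5-turbo.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in
# the environment. Prompt wording and the example URL are made up here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "Does this link lead to a phishing site? Answer yes or no.",
    "Is it safe to access this link? Answer yes or no.",
]

def ask_about_url(url: str) -> list[str]:
    """Return the model's raw answer to each question for one URL."""
    answers = []
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"{question}\nURL: {url}"}],
            temperature=0,  # keep the classification as deterministic as possible
        )
        answers.append(response.choices[0].message.content.strip())
    return answers

print(ask_about_url("http://faceb00k-login-verify.example.com/secure"))
```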
As a result, ChatGPT achieved an 87.2% detection rate and a 23.2% false positive rate on the first question. On the second, the detection rate rose to 93.8%, but false positives reached 64.3%. According to the Kaspersky team, that false positive rate is excessive by the standard of any other cyber protection system.
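For clarity: the detection rate is the share of genuinely malicious links the model flagged, while the false positive rate is the share of safe links it wrongly flagged, which implies the test set also included links known to be legitimate. A small sketch of the arithmetic, with invented sample data:

```python
# Sketch of the two reported metrics. Each result pairs the ground-truth
# label with the model's verdict; the sample data below is made up.
def detection_and_false_positive_rate(results):
    """results: pairs of (is_phishing: bool, flagged_as_phishing: bool)."""
    results = list(results)  # allow any iterable; we scan it several times
    true_pos = sum(1 for actual, flagged in results if actual and flagged)
    false_pos = sum(1 for actual, flagged in results if not actual and flagged)
    phishing_total = sum(1 for actual, _ in results if actual)
    legit_total = sum(1 for actual, _ in results if not actual)
    return true_pos / phishing_total, false_pos / legit_total

sample = [(True, True), (True, True), (True, False), (False, True), (False, False)]
detection, false_positives = detection_and_false_positive_rate(sample)
print(f"detection: {detection:.1%}, false positives: {false_positives:.1%}")
```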

ChatGPT has shown potential
Even though the tests showed that OpenAI’s chatbot still needs more training, there is already something to be impressed by.
It is common for cybercriminals to use famous brands in their fraudulent links, since familiarity is a weapon for deceiving people. In this regard, ChatGPT presented very satisfactory results in tests to identify possible phishing attacks.
In Kaspersky’s examples, the application was able to spot the impersonation in more than half of the addresses. Scams using brand names such as Facebook, TikTok and Amazon, as well as banks around the world, were recognized by the AI, all without any additional training.
On the other hand, the tool struggled to explain its findings. Some explanations were correct, others were not, but most seriously, the chatbot produced fabricated claims and false evidence to justify its verdicts.
In the words of Fabio Assolini, director of Kaspersky’s Global Research and Analysis Team for Latin America:
Although it is still at an early stage regarding the logic involved in attacks and the identification of scams, ChatGPT tends to produce random results. Another challenge will be detecting phishing attacks that exploit regional brands that are little known globally.
It is worth remembering that OpenAI released details about ChatGPT’s security measures in April 2023.
With information: Kaspersky.