ChatGPT and Cybercrime

ChatGPT, a generative AI platform, is currently the most talked-about topic in AI. While its capabilities are impressive, the technology is a concern for all sectors because it is being misused to commit cybercrime.

Large language models such as ChatGPT are already being used to create phishing emails designed to deceive recipients and steal personal information. It is estimated that generative AI can increase the number of victims who click on phishing emails from about 100 clicks to 3,000–4,000 clicks per 10,000 emails sent.

ChatGPT can also be used to create polymorphic malware, which has a high potential to evade detection by existing security tools.

According to a report by Palo Alto Networks (a cybersecurity company that provides cloud-based security services), the number of Android malware samples posing as AI chatbots such as ChatGPT has increased significantly. These attacks target people interested in trying the ChatGPT tool.

Moreover, a Meterpreter Trojan disguised as a “ChatGPT” app sends premium-rate text messages to phone numbers in Thailand, charging users without their knowledge and generating substantial income for the criminals.

A further concern is that Android users can download applications from sources other than the Google Play Store, so they risk installing applications that have not been vetted by Google.

In addition, the FBI has warned that hackers are increasingly using generative AI such as ChatGPT to create malware and to attack users through deception and fraud. AI adoption is predicted to keep rising, and the technology can also serve as an extra tool for conventional crimes, for example AI voice generators that impersonate the voices of your acquaintances to make scams more convincing.

Hackers can also reuse the most effective code from old and leaked databases, as well as from open-source research. This makes it easier to create malware without any coding knowledge, resulting in a surge of new malware creators. Although the quality and threat level of their malware is still low for now, the risks will increase as these actors learn and become more skilled.

Sam Altman, the CEO of OpenAI (the company behind ChatGPT), has expressed concern about this topic, emphasizing the importance of AI and urging users to use it properly and transparently. He added that mitigating the risks of AI should be treated as a global priority, alongside pandemics and nuclear war, in order to reduce risks to humanity.

In conclusion, we can mitigate the cyber-attacks described above by raising data-security awareness across all sectors, updating software and applications regularly, installing software and applications only from reliable sources, and not collecting users’ personal information without consent, so as to prevent data leaks.


Read more:

“FraudGPT”, the start of a new era of digital risk

Guidelines for using ChatGPT as an assistant in software development