NIX Solutions: ChatGPT AI Bot Began to Be Used to Create Viruses

After Google, experts at Check Point Research have also recognized the increasingly popular ChatGPT chatbot as a threat. In this case, however, the concern is not competition among search engines: hackers have already learned to use the conversational AI to write malicious code.


A key feature of the chatbot developed by OpenAI is its conversational mode, which supports requests in natural language. Although the service's terms technically prohibit the creation of malicious software, hackers, according to 4PDA, have found a way around this restriction by "persuading" the AI to help them write code.

As an example, the researchers cited a Python program that searches for files, including images and Microsoft Office documents, copies them, and uploads them to an attacker's server. In another case, a Python script encrypts and decrypts files; with some refinement, it could be turned into ransomware.

Another user used the chatbot to write Java code that, in conjunction with PowerShell, can stealthily download and run other malware on infected systems. NIX Solutions adds that at the end of last year, a participant in a hacker forum demonstrated a tool for generating scripts that extract information from other users' accounts.

According to experts, it is difficult to say how effective a cyberattack generated with ChatGPT would be. Moreover, from a technical standpoint, it is extremely difficult to determine whether a given piece of malware was written with this particular AI bot.