WormGPT: What to know about ChatGPT’s malicious cousin



It was only a matter of time before the AI chatbot was emulated for malicious purposes — and one such tool is now on the market, known as WormGPT.

ZDNET screenshot via Twitter

When ChatGPT was made available to the public on November 30, 2022, the AI chatbot took the world by storm. 

The software was developed by OpenAI, an AI and research company. ChatGPT is a natural language processing tool able to answer queries and provide information, drawing on training data that includes books and online web pages. It has since become a valued tool for on-the-fly information gathering, analysis, and writing tasks for millions of users worldwide. 

Also: The 5 best VPN services (and tips to choose the right one for you)

While some experts believe the technology could prove as disruptive as the internet itself, others note that ChatGPT demonstrates "confident inaccuracy." Students have been caught in droves plagiarizing coursework with the tool, and unless datasets are verified, tools such as ChatGPT could become unwitting vehicles for spreading misinformation and propaganda. 

Indeed, the US Federal Trade Commission (FTC) is investigating OpenAI over its handling of personal information and the data used to create its language model. 

Beyond data protection concerns, however, whenever a new technological innovation is made, so are pathways for abuse.

On July 13, researchers from cybersecurity firm SlashNext published a blog post revealing the discovery of WormGPT, a tool being promoted for sale on a hacker forum.

According to the forum user, the WormGPT project aims to be a blackhat “alternative” to ChatGPT, “one that lets you do all sorts of illegal stuff and easily sell it online in the future.”

Also: Scammers are using AI to impersonate your loved ones. Here’s what to watch out for

SlashNext gained access to the tool, described as an AI module based on the GPT-J language model. WormGPT has allegedly been trained with data sources including malware-related information — but the specific datasets remain known only to WormGPT's author. 

It may be possible for WormGPT to generate malicious code, for example, or convincing phishing emails. 

WormGPT is described as "similar to ChatGPT but has no ethical boundaries or limitations."

ChatGPT has a set of rules in place to try to stop users from abusing the chatbot unethically. These include refusing to complete tasks related to criminality and malware. However, users are constantly finding ways to circumvent these limitations.

Also: GPT-4 is getting significantly dumber over time, according to a study

The researchers were able to use WormGPT to “generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.” The team was surprised at how well the language model managed the task, branding the result “remarkably persuasive [and] also strategically cunning.”

While the researchers didn't say whether they tried the malware-writing service, it is plausible that the AI bot could produce malicious code, given that ChatGPT's restrictions do not apply. 

Also: Gmail will help you write your emails now: How to access Google’s new AI tool

Posts viewed by ZDNET in a Telegram channel, reportedly launched to promote the tool, indicate the developer is creating a subscription model for access, with prices ranging from $60 to $700. A channel member, "darkstux," alleges that there are already over 1,500 users of WormGPT.

WormGPT is not connected to ChatGPT's maker. ChatGPT was developed by OpenAI, a legitimate and respected organization; WormGPT is not its creation, but rather an example of how cybercriminals can take inspiration from advanced AI chatbots to develop their own malicious tools. 

Even in the hands of novices and typical scammers, natural language models could turn basic, easily avoided phishing and BEC scams into sophisticated operations more likely to succeed. There's no doubt that where money is to be made, cybercriminals will pursue the lead — and WormGPT is only the start of a new range of cybercriminal tools set to be traded in underground markets. 

Also: 6 skills you need to become an AI prompt engineer

It's also unlikely that WormGPT is the only such tool out there. Meanwhile, regulators and law enforcement agencies are taking notice: 

  • Europol: Europol said in the 2023 report, “The impact of Large Language Models on Law Enforcement,” that “it will be crucial to monitor […] development, as dark LLMs (large language models) trained to facilitate harmful output may become a key criminal business model of the future. This poses a new challenge for law enforcement, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge.”
  • Federal Trade Commission: The FTC is investigating ChatGPT maker OpenAI over data usage policies and inaccuracy.
  • UK National Crime Agency (NCA): The NCA warns that AI could prompt an explosion in the risk and abuse facing young people.
  • UK Information Commissioner's Office (ICO): The ICO has reminded organizations that their AI tools are still bound by existing data protection laws.

Legitimate chatbots cannot be abused without covert tactics — but with the right prompts, many natural language models can be persuaded into performing particular actions and tasks. 

Also: Real-time deepfake detection: How Intel Labs uses AI to fight misinformation

ChatGPT, for example, can draft professional emails, cover letters, resumes, purchase orders, and more. This capability removes some of the most common indicators of a phishing email: spelling mistakes, grammar issues, and second-language errors. That alone could cause a headache for businesses attempting to detect suspicious messages and train their staff to recognize them. 
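The detection problem can be illustrated with a toy heuristic (a hypothetical sketch, not any real product's filtering logic): a filter that scores mail on crude surface cues, such as common misspellings and pushy phrases, will flag a clumsy scam but pass fluent, AI-polished text untouched.

```python
# Hypothetical sketch: a naive phishing filter relying on surface cues.
# The misspelling list and trigger phrases below are illustrative only.
COMMON_MISSPELLINGS = {"recieve", "urgnet", "acount", "verfiy", "pasword"}
PUSHY_PHRASES = ("act now", "wire transfer", "password expired")

def crude_phishing_score(email_text: str) -> int:
    """Count crude red flags: known misspellings and pressure phrases."""
    lowered = email_text.lower()
    # One point per word matching a known misspelling.
    score = sum(
        1 for word in lowered.split()
        if word.strip(".,!?:") in COMMON_MISSPELLINGS
    )
    # One point per pressure phrase found anywhere in the text.
    score += sum(1 for phrase in PUSHY_PHRASES if phrase in lowered)
    return score

clumsy = "Urgnet: verfiy your acount pasword now!"
polished = "Hi Sam, following up on invoice 4521 ahead of Friday's close."
print(crude_phishing_score(clumsy))    # high score: several misspellings
print(crude_phishing_score(polished))  # 0: fluent text trips nothing
```

A well-written, AI-generated lure scores zero here, which is exactly why SlashNext and others argue that defenses must look beyond spelling and grammar.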

SlashNext researchers say, “cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”

Also: 7 advanced ChatGPT prompt-writing tips you need to know

For step-by-step instructions on using ChatGPT for legitimate purposes, check out ZDNET's guide on how to start using ChatGPT. 

ChatGPT is free to use. The tool can be used to answer general queries, write content and code, or generate prompts for everything from creative stories to marketing projects. 

There is a subscription option, ChatGPT Plus, which users can sign up for. The subscription costs $20 per month and provides users with access to ChatGPT even during peak times, faster response times, and priority access to improvements and fixes.

Also: How to access, install, and use AI ChatGPT-4 plugins (and why you should)


