Microsoft’s Bing Chat AI chatbot has gained much popularity because it is free, connected to the internet, powered by GPT-4, and multimodal, boasting features the free version of ChatGPT doesn’t have. However, a new report shows that its responses may include ads that lead to malware.
In March, a month after releasing its chatbot, Microsoft started experimenting with ads in Bing Chat responses. The ads appear within the chat experience, both in the footnotes of a response and in the panel that pops up when hovering over a response.
A report from Malwarebytes shows that those ads are now being used for malvertising, the practice of using seemingly legitimate online advertisements to spread malware and infiltrate unsuspecting users’ devices.
According to the report, scammers are inserting malicious ads so that they appear first, above the organic result, in Bing Chat’s hover ad experience.
For example, in Malwarebytes’ search for Advanced IP Scanner, the first and most prominent link shown is the malicious ad. Underneath it, in tiny letters, is the actual organic result that users would likely miss.
The report found that users who clicked the first link were redirected to a website that filters traffic to separate real victims from bots and security crawlers.
Real victims are then redirected to a decoy page that tempts them to click a “free download” button, which delivers an installer containing malicious files. To execute this scam, a malicious actor most likely compromised the ad account of a legitimate business and used it to create the malicious ads.
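To make the “traffic filtering” step more concrete, here is a minimal, hypothetical sketch of how such a cloaking check might work: the redirect service inspects each incoming ad click and sends suspected bots or security crawlers to a harmless page, while forwarding likely human visitors to the decoy download site. The URLs, user-agent heuristics, and Flask routing below are illustrative assumptions, not details taken from the Malwarebytes report.

```python
# Hypothetical sketch of ad "cloaking": bots and scanners see a benign page,
# while likely human visitors are redirected to the decoy download site.
# All URLs and heuristics here are made up for illustration.
from flask import Flask, request, redirect

app = Flask(__name__)

# Crude signatures a cloaking service might use to spot automated crawlers.
BOT_SIGNATURES = ("bot", "crawler", "spider", "curl", "python-requests")

BENIGN_PAGE = "https://example.com/harmless-landing"  # shown to scanners
DECOY_PAGE = "https://example.com/fake-download"      # shown to victims


def looks_like_bot(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known automation signature."""
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)


@app.route("/ad-click")
def ad_click():
    user_agent = request.headers.get("User-Agent", "")
    if looks_like_bot(user_agent):
        # Security crawlers and ad reviewers get a clean page.
        return redirect(BENIGN_PAGE)
    # Everyone else is funneled toward the decoy "free download" page.
    return redirect(DECOY_PAGE)


if __name__ == "__main__":
    app.run(port=8080)
```

In practice, such filtering can also rely on IP reputation, geolocation, or request timing, which is why the malicious destination is hard for automated ad review to catch.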
The solution isn’t necessarily to stop using Bing Chat, since malicious ads can appear anywhere online. Instead, users should be careful about what they download from the web and verify that a site is legitimate before installing anything from it.