Android users beware: ChatGPT AI chatbot malware targets cellphones in 2023

In a recent report, researchers at Palo Alto Networks Unit 42 uncovered a significant increase in Android-specific malware. Camouflaged as the well-known AI chatbot ChatGPT, the malware has been targeting smartphone users without their knowledge.

The increase in malware variants coincided with OpenAI’s release of GPT-3.5 and GPT-4. Cybercriminals have exploited the popularity of ChatGPT to infect victims eager to interact with the AI chatbot.

Researchers have identified two varieties of malicious software that threaten Android users. The first masquerades as an ostensibly innocuous “SuperGPT” application but is actually a Meterpreter Trojan, giving cybercriminals unauthorized remote access to infected Android devices.

The second poses as a legitimate “ChatGPT” application. Once installed, it silently sends text messages to premium-rate numbers in Thailand.


Premium-rate phone numbers, which incur higher charges than standard numbers, are typically used to bill callers for specific services or information.

In this instance, the malware’s operators collect the proceeds. The same premium-rate numbers can also be used for scams and other fraudulent activities.
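For illustration only, the sketch below shows the standard Android SmsManager API that SMS-fraud malware of this kind typically abuses. The destination number and message text are hypothetical placeholders, not details taken from the Unit 42 report.

```kotlin
import android.telephony.SmsManager

// Illustrative sketch: how a trojanized app could abuse the ordinary
// Android SMS API to message a premium-rate number in the background.
// The number and text below are placeholders, not real malware artifacts.
fun sendPremiumRateSms() {
    // Requires the SEND_SMS permission, which the fake app asks the user to grant.
    // SmsManager.getDefault() is deprecated on newer Android versions but still works.
    val smsManager = SmsManager.getDefault()
    smsManager.sendTextMessage(
        "1900123456",   // hypothetical premium-rate destination
        null,           // default SMS service center
        "SUBSCRIBE",    // hypothetical billing keyword
        null,           // no sent-status PendingIntent
        null            // no delivery-status PendingIntent
    )
}
```

Because the message is sent programmatically, the victim typically sees nothing until the charges appear on their phone bill.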

Additionally, researchers discovered a trojanized Android Package Kit (APK) application themed around the most recent ChatGPT version. If successfully installed, this infected application allows cybercriminals to take remote control of Android devices.

The researchers also found a cluster of APK malware samples masquerading as innocuous web pages offering information about ChatGPT. Beneath this guise lies malicious intent: the samples are designed to compromise the devices of unwary users.

The fact that all of these APK malware samples use the OpenAI logo, which is commonly associated with the ChatGPT AI tool, adds to the deception.

By using the recognizable logo as the icon for their application, cybercriminals intend to create the appearance of legitimacy and association with ChatGPT.

This recent increase in Android malware poses a significant threat to smartphone users, who are unknowingly being targeted by cybercriminals seeking to exploit their trust in popular AI applications such as ChatGPT.

Users are advised to exercise caution when downloading applications, especially those professing affiliation with ChatGPT or OpenAI.

Implementing mobile security measures, such as keeping devices up to date and using reputable antivirus software, can reduce the likelihood of falling victim to these attacks.
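As one concrete precaution, a technically inclined user or administrator can check the signing certificate of an installed app before trusting it. The following is a minimal sketch, assuming Android API level 28 or higher, that prints the SHA-256 fingerprint of an installed package’s signing certificate so it can be compared with the fingerprint published by the app’s legitimate developer; the package name shown is a placeholder, not a real app identifier.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Minimal sketch (API 28+): print the SHA-256 fingerprint of an installed
// app's signing certificate for comparison against the developer's published
// fingerprint. "com.example.chatgpt" is a placeholder package name.
fun printSignerFingerprints(context: Context, packageName: String = "com.example.chatgpt") {
    // Throws NameNotFoundException if the package is not installed.
    val info = context.packageManager.getPackageInfo(
        packageName,
        PackageManager.GET_SIGNING_CERTIFICATES
    )
    val signers = info.signingInfo?.apkContentsSigners ?: return
    for (sig in signers) {
        val digest = MessageDigest.getInstance("SHA-256").digest(sig.toByteArray())
        val fingerprint = digest.joinToString(":") { "%02X".format(it) }
        println("$packageName signer SHA-256: $fingerprint")
    }
}
```

A mismatched fingerprint, or an app obtained outside an official store, is a strong signal that the application is not what it claims to be.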
