ChatGPT's AI Meme Trend Sparks Fraud Fears, Experts Warn

The Growing Trend of AI Caricatures and the Hidden Risks
A new artificial intelligence (AI) trend lets users generate caricatures of themselves from their photos and personal details. The trend, which involves uploading images to platforms like OpenAI’s ChatGPT, may seem harmless at first glance. However, cybersecurity experts warn that it could pose serious security risks.
Users typically upload a photo of themselves along with details such as their job title or company logo and ask the AI to create a visual representation of them in a professional context. While this might appear to be a fun social media challenge, experts caution that such activities can expose sensitive personal data.
Why AI Caricatures Are a Security Concern
Cybersecurity professionals highlight that these AI-generated images can provide fraudsters with valuable information. According to Bob Long, vice-president at Daon, an identity verification company, social media challenges like AI caricatures can act as a "treasure trove" for malicious actors. A single image paired with personal details can reveal more than users realize.
Long emphasized that the very nature of the challenge raises red flags. “It sounds like it was intentionally started by a fraudster looking to make the job easy,” he said.
What Happens to Your Image Once It's Uploaded?
When users upload an image to an AI chatbot, the system processes it to extract various types of data. Cybersecurity consultant Jake Moore explained that this includes analyzing emotions, environments, and even clues about a person’s location. This information may be stored for an unknown duration.
Moreover, the images collected from users could be used to train AI image generators. As Long pointed out, this means that the data is not just being processed—it’s being added to datasets that help improve AI models.
A data breach at a company like OpenAI could result in sensitive information falling into the wrong hands. This could lead to the creation of fake social media accounts or deepfakes used for scams.
The Potential for Identity Theft and Scams
Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company, warned that high-resolution images could be exploited to create realistic AI deepfakes. These could then be used for impersonation scams.
“Selfies help criminals move from generic scams to personalized, high-conviction impersonation,” she said.
OpenAI’s privacy policy states that uploaded images may be used to improve the model. However, the company clarified that this does not mean every photo is placed in a public database. Instead, the chatbot uses patterns from user content to refine its image generation capabilities.
How to Safely Participate in AI Trends
For those who still want to engage with AI trends, experts recommend taking precautions. Wilson advised users to avoid uploading images that reveal identifying information. She suggested:
- Cropping tightly to remove unnecessary details
- Keeping the background plain
- Avoiding badges, uniforms, work lanyards, or any clues that tie you to an employer or routine
Wilson also cautioned against sharing personal information in prompts, such as job titles, cities, or employers.
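Cropping limits what is visible in the frame, but a photo file can also carry invisible clues: EXIF metadata embedded by phones and cameras can include the capture time and, in some cases, GPS coordinates. The experts quoted here do not mention metadata directly, but stripping it before upload is consistent with their advice. As an illustrative sketch only (not a vetted tool), the following standard-library Python function drops the APP1 segments, which hold EXIF and XMP data, from a JPEG byte stream:

```python
import struct

def strip_exif_jpeg(data: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF/XMP) segments removed.

    Illustrative sketch: walks the JPEG segment structure (marker, 2-byte
    big-endian length, payload) and copies every segment except APP1.
    """
    if data[:2] != b"\xff\xd8":  # SOI marker: every JPEG starts with FF D8
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows; copy the rest verbatim
            out += data[i:]
            break
        # Segment length covers the two length bytes plus the payload
        length = struct.unpack(">H", data[i + 2 : i + 4])[0]
        segment = data[i : i + 2 + length]
        if marker != 0xE1:  # keep everything except APP1 (EXIF/XMP)
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice, re-saving a screenshot or using an export option that omits metadata achieves the same effect without any code; the sketch simply shows that the embedded data is a distinct, removable part of the file.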
Managing Privacy Settings
Moore recommended reviewing privacy settings before participating in AI trends. He highlighted the importance of checking options to remove data from AI training.
OpenAI provides a privacy portal where users can opt out of AI data training by clicking on “do not train on my content.” Users can also disable the “improve the model for everyone” setting in their text conversations with ChatGPT.
Under EU data protection law (the GDPR), users have the right to request the deletion of personal data collected by the company. However, OpenAI notes that some information may still be retained to address fraud, abuse, and security concerns.