ChatGPT's AI Meme Trend Sparks Fraud Warnings

The Risks Behind AI Caricature Trends
A new artificial intelligence (AI) trend invites users to create caricatures of themselves based on their photos and personal details. The process, often carried out through platforms such as OpenAI’s ChatGPT, involves uploading a picture along with information about one's role or job. While this might seem like a fun way to engage with AI, cybersecurity experts warn that it can pose serious security risks.
According to cybersecurity professionals, these social media challenges can be a goldmine for fraudsters. A single image combined with personal information can reveal more than users realize. Bob Long, vice-president at Daon, an age authentication company, highlights the dangers: “You are doing fraudsters’ work for them, giving them a visual representation of who you are.” He argues that the wording of such trends should raise red flags: it resembles the kind of challenge a fraudster might launch themselves to make their own job easier.
What Happens to Your Images?
When a user uploads an image to an AI chatbot, the system processes it to extract data points such as the person's emotional state, surroundings, and potentially even their location. According to Jake Moore, a cybersecurity consultant, this data may then be stored for an unknown period.
Long explains that images collected from users can be retained and used to train AI image generators as part of their datasets. If a company like OpenAI experienced a data breach, uploaded images and other personal information could fall into the wrong hands and be exploited by bad actors.
Potential Misuse of Personal Data
In the wrong hands, a high-resolution image could be used to create fake social media accounts or realistic AI deepfakes. Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company, warns that such images can help criminals move from generic scams to personalized, high-conviction impersonation.
OpenAI’s privacy settings state that uploaded images may be used to improve the model, which can include training it. When asked about the model’s privacy settings, ChatGPT clarified that this does not mean every photo is placed in a public database. Instead, the chatbot uses patterns from user content to refine how the system generates images.
How to Safely Participate in AI Trends
For those still interested in following the trend, experts recommend limiting what you share. Wilson advises users to avoid uploading images that reveal any identifying information. She suggests cropping tightly, keeping the background plain, and avoiding badges, uniforms, work lanyards, location clues, or anything that ties you to an employer or routine.
Wilson also cautions against oversharing personal information in prompts, such as job titles, cities, or employers.
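One source of location clues the advice above implies but does not spell out is hidden metadata: smartphone photos often embed GPS coordinates and device details in an EXIF block, which travels with the file even if the visible background is plain. As a minimal, pure-Python sketch (not a hardened implementation, and assuming a well-formed baseline JPEG), EXIF data lives in a JPEG's APP1 segment and can be dropped before uploading; the helper name `strip_exif` is illustrative, not from any library:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return JPEG bytes with APP1 (EXIF/XMP metadata) segments removed.

    Illustrative sketch only: assumes a well-formed baseline JPEG and
    does not handle every marker edge case a production tool would.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker starts every JPEG
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data: copy rest verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows
            out += jpeg_bytes[i:]
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # drop APP1 (EXIF/XMP); keep everything else
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice, most image editors and some messaging apps also strip metadata on export; the point is simply to check that location data is gone before a photo leaves your device.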
Managing Privacy Settings
Moore recommends reviewing privacy settings before taking part in AI trends, including any option to exclude your data from model training. OpenAI has a privacy portal that lets users opt out of AI training by selecting “do not train on my content.”
Users can also exclude their text conversations with ChatGPT from training by turning off the “improve the model for everyone” setting. Under EU data protection law, users can request the deletion of personal data the company holds. However, OpenAI notes that it may retain some information even after a deletion request to address fraud, abuse, and security concerns.