ChatGPT AI Caricature Trend Sparks Fraud Warnings From Security Experts

The Rise of AI Caricature Trends and Their Security Implications
A new social media trend uses artificial intelligence (AI) to generate caricatures of users based on their photos and personal details. The trend, which relies on platforms such as OpenAI’s ChatGPT to create visual representations of individuals and their jobs, has raised concerns among cybersecurity experts.
According to these experts, uploading a photo alongside company logos or job-related information exposes users to real security risks: malicious actors can mine the images and accompanying details to build a detailed picture of an individual.
Why AI Caricatures Are a Cause for Concern
Bob Long, vice-president at Daon, a company specializing in identity verification, warned that social media challenges involving AI caricatures could be a goldmine for fraudsters. He suggested that some of these trends may even be started deliberately by cybercriminals to make their work easier.
“By participating in these challenges, you are essentially doing the work for fraudsters,” Long said. “You are giving them a visual representation of who you are.”
The way these challenges are worded can itself raise red flags. According to Long, the phrasing often sounds as if it were written by someone with malicious intent.
What Happens to Your Images?
When a user uploads an image to an AI chatbot, the system can extract a range of details from it: the person's emotional state, their surroundings, even clues about their location. That information may then be stored for an unspecified period.
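To make the location point concrete: many phone photos carry EXIF metadata, which can include the GPS coordinates of where the shot was taken. The following minimal sketch, using Python's Pillow library (the file name is a placeholder), shows how easily such clues can be read from an image file before any AI analysis even begins:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

# Open a local photo and read its EXIF block. Most phone cameras
# embed one by default; "selfie.jpg" is a placeholder file name.
img = Image.open("selfie.jpg")
exif = img.getexif()

# Tag 0x8825 is the standard EXIF pointer to the GPS sub-directory,
# which (when present) holds the latitude and longitude of the shot.
gps = exif.get_ifd(0x8825)

if gps:
    for tag_id, value in gps.items():
        print(GPSTAGS.get(tag_id, tag_id), value)
else:
    print("No GPS metadata found.")
```

Many social networks strip this metadata on upload, but a file sent directly to a chatbot or app may arrive with it intact, so it is worth checking before sharing.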
Jake Moore, a cybersecurity consultant, explained that the images collected from users could be used to train AI image generators. This means that the data uploaded by individuals might end up in the datasets used to improve these systems.
A data breach at a company like OpenAI could lead to sensitive information, including uploaded images and personal details, falling into the wrong hands. Cybercriminals could then exploit this data for various fraudulent activities.
Risks of High-Resolution Images
Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company, highlighted the dangers of high-resolution images. She warned that such images could be used to create fake social media accounts or realistic AI deepfakes, which could be employed in scams.
“Selfies help criminals move from generic scams to personalized, high-conviction impersonation,” she said.
OpenAI’s privacy settings note that uploaded images may be used to improve, and in some cases train, its models. When asked about those settings, ChatGPT said that individual photos are not placed in a public database; instead, the system uses patterns drawn from user content to refine how it generates images.
Tips for Participating in AI Trends Safely
For those who still want to take part in these AI trends, experts recommend being careful about what they share. Wilson advised users to avoid uploading images that reveal any identifying information.
“Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said.
Wilson also cautioned against sharing personal information in prompts, such as job titles, cities, or employers.
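For readers who want to apply this advice programmatically, here is a minimal sketch, again using Pillow, that crops a photo to a tight box and re-saves the pixels without carrying the original EXIF metadata across. The file names and crop coordinates are placeholders, not a prescribed workflow:

```python
from PIL import Image

# Load the original photo and crop tightly around the face,
# following Wilson's advice to exclude backgrounds, badges and
# other context. The box coordinates are illustrative only.
img = Image.open("selfie.jpg")
cropped = img.crop((400, 200, 900, 700))  # (left, upper, right, lower) in pixels

# Pillow writes EXIF to a new JPEG only when it is passed explicitly,
# so saving the crop like this drops GPS, timestamp and device tags.
cropped.save("selfie_clean.jpg", quality=90)
```

A plain crop-and-resave covers the metadata side; the visual advice, such as a plain background and no lanyards, still has to be handled when the photo is taken.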
Managing Privacy Settings
Jake Moore suggested reviewing privacy settings before joining any AI trend, including options to exclude your data from model training. OpenAI provides a privacy portal where users can opt out by selecting “do not train on my content.”
Users can also turn off the “improve the model for everyone” setting in their text conversations with ChatGPT. Under EU data protection law (the GDPR), users have the right to request the deletion of personal data the company holds, although OpenAI notes that some information may still be retained for fraud, abuse, and security purposes.