ChatGPT AI Deepfakes: A Fraud Risk, Experts Warn

Artificial intelligence (AI) chatbots have fueled a new trend in which users generate caricatures of themselves. Participants upload a photo, often along with details such as their job title or a company logo, and ask the AI to create a visual representation from that information. Cybersecurity experts, however, warn that this seemingly harmless activity could pose significant security risks.

According to cybersecurity professionals, social media challenges involving AI-generated caricatures can hand fraudsters valuable personal information. A single image, combined with personal details, can reveal more than users realize. Bob Long, vice-president at Daon, an age authentication company, stressed that joining such trends essentially does fraudsters' work for them by handing over a visual identity of the user.

Long added that the phrasing of these challenges should itself be a red flag: the way they are presented, he argued, "sounds like it was intentionally started by a fraudster looking to make the job easy."

What happens to images once they are uploaded?

When a user uploads an image to an AI chatbot, the system can extract a range of data points from it, such as the person's apparent emotional state, their surroundings, or even clues about their location. According to Jake Moore, a cybersecurity consultant, this information may then be stored for an unknown period of time.
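One concrete example of such a location clue, separate from anything a model infers from the pixels themselves, is the EXIF metadata that many phone cameras embed in photos. Here is a minimal sketch of reading it, assuming the Python Pillow library is installed and using "selfie.jpg" as a hypothetical file name:

```python
# Sketch: read any GPS coordinates embedded in a photo's EXIF metadata.
# This illustrates what travels inside the file itself; it is not a claim
# about how any particular chatbot analyzes uploads.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_clues(path: str) -> dict:
    """Return the GPS tags embedded in the image at `path`, if any."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPS sub-IFD pointer
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

if __name__ == "__main__":
    # "selfie.jpg" is a placeholder; a phone photo often yields latitude,
    # longitude, altitude, and a timestamp here.
    print(gps_clues("selfie.jpg"))
```

If those tags come back populated, the file alone can pin down where and roughly when the photo was taken, before any AI analysis of its content.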

Long explained that images collected from users can be retained and used to train AI image generators as part of their datasets. If a company such as OpenAI suffered a data breach, sensitive data, including uploaded images and personal information, could fall into the wrong hands and be exploited.

Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company, warned that a high-resolution image could be used to create fake social media accounts or realistic AI deepfakes, which could then be deployed in scams, particularly if the image includes identifying details.

OpenAI’s privacy settings state that uploaded images may be used to improve the model, which can include training it. Asked about those settings, ChatGPT said this does not mean every photo is placed in a public database; rather, the system uses patterns from user content to refine how it generates images.

What to do if you want to participate in AI trends

For those who still want to follow the trend, experts recommend limiting what you share. Wilson advised users to avoid uploading images that reveal any identifying information. She suggested cropping tightly, keeping the background plain, and avoiding badges, uniforms, work lanyards, location clues, or anything that ties the user to an employer or routine.
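A minimal sketch of that advice in code, again assuming Pillow and using placeholder file names and a placeholder crop box: cropping to the face and copying the pixels into a fresh image discards both the background and the file's embedded metadata.

```python
# Sketch: crop a photo tightly and re-save it without its original metadata.
from PIL import Image

def sanitize(src: str, dst: str, box: tuple[int, int, int, int]) -> None:
    """Crop to `box` and save a copy stripped of EXIF/GPS metadata."""
    cropped = Image.open(src).crop(box)  # box = (left, upper, right, lower) px
    # Pasting the pixels into a brand-new image leaves EXIF, GPS, and other
    # metadata behind, regardless of how the library's save defaults behave.
    clean = Image.new(cropped.mode, cropped.size)
    clean.paste(cropped)
    clean.save(dst)

if __name__ == "__main__":
    # Placeholder values: keep only the face region of the photo.
    sanitize("selfie.jpg", "selfie_cropped.jpg", (400, 200, 900, 700))
```

Stripping metadata does nothing about what remains visible in the pixels, so the advice about badges, uniforms, and lanyards still applies to whatever survives the crop.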

Wilson also cautioned against oversharing personal information in the prompts, such as job titles, cities, or employers.

Moore recommended reviewing privacy settings before participating in such trends, including the option to keep your data out of AI training. OpenAI provides a privacy portal where users can opt out by selecting “do not train on my content.”

Users can also stop their text conversations with ChatGPT from being used for training by turning off the “improve the model for everyone” setting. Under EU law, users can request the deletion of personal data the company has collected. However, OpenAI notes that it may retain some information even after deletion to address fraud, abuse, and security concerns.