Finding a deepfake of yourself is distressing. Finding one of your child, partner, or colleague is equally confronting. What you do in the first hour matters. Act fast, follow these steps in order, and know that Australian law gives you real options to get the content removed and hold the creator accountable.
Do not contact the person who created or posted the deepfake directly. Do not threaten them publicly. Do not share the content further, even to "warn" others. Every share increases the reach. Preserve evidence quietly, then report through official channels.
Step-by-Step Response
Step 1: Screenshot and Save Everything
Before you report anything, capture evidence. Once you report content for removal, the platform or creator may delete it -- and you lose your proof.
What to capture:
- Screenshots of the deepfake content itself (multiple frames if it's a video)
- The URL of the page where it appears
- The username and profile URL of the account that posted it
- Any comments, captions, or descriptions attached to the post
- The date and time you discovered it
- Screen recordings if the content is a video (use your phone's built-in screen recording)
Save everything to a folder on your device and back it up to cloud storage. You'll need this for platform reports, the eSafety Commissioner, and potentially police.
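If you're comfortable with a little code, one optional extra step is to fingerprint each saved file so you can later show it hasn't been altered since you collected it. The short Python sketch below is an illustration only -- the function name and the "manifest.json" filename are our own, not part of any official process -- and it uses only the standard library:

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def build_evidence_manifest(folder, manifest_path):
    """Record a SHA-256 hash and a UTC timestamp for every file in `folder`.

    The resulting manifest lets you demonstrate later that the saved
    screenshots and recordings match what you originally captured.
    """
    entries = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        # Hash the raw bytes of the file, not its name or metadata.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        entries.append({
            "file": name,
            "sha256": digest,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
    # Write the manifest once, after all files have been hashed.
    with open(manifest_path, "w") as f:
        json.dump(entries, f, indent=2)
    return entries
```

Keep the manifest alongside your evidence folder and include it in your cloud backup. If a file is ever questioned, re-hashing it and comparing against the manifest shows whether it has changed.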
Step 2: Report to the eSafety Commissioner
Australia's eSafety Commissioner (esafety.gov.au) has legal authority to order platforms to remove content. This is your strongest tool.
For intimate image abuse (including deepfake intimate content): The eSafety Commissioner can issue a removal notice requiring the platform to take the content down within 24 hours. Platforms that fail to comply face civil penalties. File a complaint at esafety.gov.au/report.
For other types of deepfake content (fraud, defamation, harassment): The eSafety Commissioner can still assist, particularly if the content targets someone living in Australia. They work directly with major platforms and have a track record of getting content removed faster than individual reports.
When filing your complaint, attach all the evidence you saved in Step 1. Be specific about what the content shows, where you found it, and the harm it's causing.
Step 3: Report to the Platform
Report directly to the platform where the deepfake appears. Do this in addition to (not instead of) the eSafety Commissioner report.
YouTube: Use the "Report" button under the video. Select "Infringes my rights" then "Other legal complaint". YouTube also has a specific process for non-consensual intimate imagery at support.google.com/youtube.
Facebook/Instagram (Meta): Use the three-dot menu on the post and select "Report". For intimate imagery, use Meta's dedicated form. Meta has committed to removing reported deepfake intimate images within 48 hours.
TikTok: Long-press the video, tap "Report", and choose the most specific category that fits -- "Harassment and bullying", or the nudity and sexual content category for intimate deepfakes (avoid generic categories like "Fake engagement", which aren't designed for content targeting a real person).
X (Twitter): Use the three-dot menu and select "Report post". For non-consensual intimate content, file at help.twitter.com.
Keep records of every report you file, including confirmation emails and reference numbers.
Step 4: File a Police Report
If the deepfake is being used for fraud, harassment, blackmail, or sexual exploitation, file a police report.
For serious or organised offences: Contact the Australian Federal Police (AFP) via cyber.gov.au/report or call the AFP on 131 237. This includes deepfakes used in financial fraud, child exploitation material, or organised crime.
For personal harassment or threats: Contact your state or territory police. In Victoria, you can report online at police.vic.gov.au. In NSW, call Crime Stoppers on 1800 333 000 or report at police.nsw.gov.au.
Bring your saved evidence. The police report creates an official record, which strengthens any future legal action and supports your eSafety Commissioner complaint.
Your Legal Options in Australia
Image-based abuse laws make it a criminal offence to share intimate images without consent in every Australian state and territory. At the federal level, the Criminal Code Act 1995 covers online distribution, with penalties of up to 7 years imprisonment for aggravated offences, and 2024 amendments expressly extended these offences to sexually explicit deepfakes. Deepfake intimate content falls squarely under these laws -- the images don't need to be "real" photographs to be illegal.
State-level offences vary but all cover the core behaviour. In Victoria, distributing an intimate image without consent is an offence under the Crimes Act 1958 (the offences were moved there from the Summary Offences Act 1966 in 2023), with penalties of up to 3 years imprisonment. NSW, Queensland, and the other states and territories have equivalent legislation.
Civil action. You can sue the creator for damages. Causes of action include breach of privacy (under the developing Australian tort), defamation (if the deepfake implies false facts about you), and intentional infliction of emotional distress. Talk to a lawyer who specialises in cyber-related claims. Many offer initial consultations at no cost.
Workplace and institutional liability. If a deepfake is created or shared by a colleague, the employer may also be liable under workplace health and safety laws. If it's created by a student, the school has obligations under duty of care.
Schools and Workplaces
If a student is targeted
Report to the school principal immediately. Schools have a duty of care and mandatory reporting obligations if the content involves a minor. The eSafety Commissioner has a dedicated process for content involving children -- reports about minors are fast-tracked.
Parents should file an eSafety Commissioner complaint in addition to notifying the school. If the creator is also a student, the school will handle disciplinary action, but the legal consequences can still be serious -- creating intimate deepfakes of a minor is a criminal offence regardless of the creator's age.
If a colleague is targeted
Report to your manager and HR department. If the deepfake was created or distributed by another employee, this is a workplace safety issue. The employer must investigate and take action under its harassment and bullying policies. If the employer fails to act, report to your state or territory work health and safety regulator (for example, WorkSafe Victoria or SafeWork NSW).
The targeted colleague should also file their own eSafety Commissioner complaint and consider a police report independently of the workplace process.
Support Services
1800RESPECT (1800 737 732) -- 24/7 counselling and support for anyone affected by image-based abuse, sexual assault, or domestic violence. Free and confidential.
eSafety Commissioner Support (esafety.gov.au) -- Dedicated case managers for image-based abuse complaints. They guide you through the reporting process and follow up with platforms on your behalf.
Lifeline (13 11 14) -- 24/7 crisis support and suicide prevention. If the situation is causing severe distress, call them.
Kids Helpline (1800 55 1800) -- For young people aged 5 to 25. Free, private, and confidential phone and online counselling.
Legal Aid -- Every state and territory has a legal aid commission that provides free initial legal advice. Contact yours for guidance on civil and criminal options.
Protect Yourself Going Forward
Audit your public photos. Deepfakes need source material. Review your social media profiles and set high-resolution photos to friends-only. The fewer public face images available, the harder it is to create a convincing deepfake of you.
Set up reverse-image alerts. Google Alerts won't catch images, but services like TinEye Alerts and Social Catfish offer ongoing monitoring for your photos appearing in new contexts online.
Talk to people around you. If you've been targeted, tell trusted friends, family, and colleagues. They may spot copies of the content on platforms you don't use. They can also report the content independently, which increases pressure on the platform to act.
Deepfakes are designed to make you feel powerless. You're not. Australian law is on your side. The eSafety Commissioner has real enforcement power. Platforms are under increasing pressure to act quickly. Take it step by step, save your evidence, and use every reporting channel available to you.