
YouTube Will Search for Deepfakes of You. All It Needs Is a Video of Your Face.


For most of YouTube’s history, if someone uploaded a convincing fake of you, your options were limited. File a report, hope someone reviewed it, wait. The tools that actually worked, the ones that proactively scanned for your likeness across millions of uploads, were reserved first for verified creators, then for politicians, journalists and celebrities.

As of now, that changes. YouTube is opening likeness detection to anyone over 18. No subscriber minimum, no verification badge, no public profile required. If your face ends up in an AI-generated video you never agreed to, YouTube will now look for it.

That’s the good news. But there’s another side you should understand before you enroll.

YouTube changed who gets protected

The feature didn’t start here. YouTube began testing likeness detection with a small group of content creators, then expanded to government officials, politicians, journalists and entertainment figures. Each expansion was framed as a response to where deepfake harm was most visible.

The jump to everyone is different in kind, not just in scale. A creator with ten videos and a few hundred subscribers now has access to the same detection dashboard as a verified journalist or a celebrity with millions of followers. YouTube spokesperson Jack Malon made the scope explicit: there are no eligibility requirements for creators at all. “Whether creators have been uploading to YouTube for a decade or are just starting, they’ll have access to the same level of protection,” he said.

You have to give YouTube your face first

To enroll, you provide a government-issued ID and a short selfie video. YouTube uses the selfie as a reference point, the baseline your face gets measured against when the system scans new uploads. If it finds a visual match, you get alerted in YouTube Studio and can decide what to do with it.

It seems simple, but the catch is what happens to that data afterward. YouTube says your likeness template, legal name and selfie video can be stored for up to three years from your last sign-in. You can withdraw consent and request deletion, but the window is long. The company also says it won’t use enrollment data to train Google’s generative AI models without your explicit consent, which is the right thing to say, though “without consent” is doing a lot of work in that sentence depending on how the consent flow is actually designed.

To be fair, YouTube isn’t asking for more than what’s necessary for the feature to function. You can’t scan for someone’s face without a reference. But handing a government ID and biometric data to Google, a company whose entire business runs on understanding who you are and what you do, is a decision worth making consciously.


This is not a deepfake kill switch

If you enroll expecting YouTube to automatically scrub every fake version of you from the internet, the reality is more modest than that.

The system flags potential matches. You review them. Then you decide whether to archive the content, file a copyright claim if your original footage was reused, or submit a privacy complaint. YouTube evaluates removal requests against a list of factors: whether the content is realistic, whether you’re uniquely identifiable, whether it’s labeled as AI-generated, and whether it qualifies as parody, satire or something in the public interest.

A blanket auto-removal system would immediately collide with commentary, journalism and fair use, so keeping human judgment in the loop makes sense. But it also means protection isn’t instant and it isn’t guaranteed. A video can stay up while a complaint works through review. The system can miss things. And it currently covers only facial likeness; voice cloning isn’t included yet, though YouTube says audio detection is coming later this year. That’s a notable gap, given that voice cloning is already a standard part of the fraud playbook.

Think of it as an early warning system rather than a shield. Better than nothing. Meaningfully better than manually searching for copies of your own face across millions of videos. But not the end of the problem.


Deepfakes stopped being a celebrity problem

The original deepfake panic was about famous people. Actors, politicians, executives, people with enough public footage to train a model on and enough name recognition to make a fake worth spreading.

That’s no longer the case. Teenagers are being deepfaked by classmates. Three teenagers recently sued xAI, alleging that Grok generated child sexual abuse material of them. The tools that once required serious technical skill and hours of footage are now accessible enough that the threat has moved well down from public figures into ordinary private life.

YouTube opening this feature to everyone is the platform implicitly acknowledging that reality. The deepfake problem reached ordinary people faster than the protections did, and this is an attempt to close that gap. No single tool will close it completely, but the direction is right.

The next question is whether other platforms follow. TikTok, Instagram, X and LinkedIn all host identity-driven content and all have users who can be harmed by synthetic impersonation. YouTube moving first creates a visible standard. Weaker controls elsewhere will become harder to defend once users know what’s possible.
