The Deep Media Deepfake Census
March 1, 2024
When you’re developing AI detection models, it’s easy to get lost in the numbers and parameters and lose sight of why you’re doing the work in the first place. In the world of deepfake detection, we do this work because it’s our job, and because we believe it makes the world a safer place. Concretely, we believe we can reduce the spread of misinformation driven by AI-generated content on social media and beyond, and we at Deep Media believe it’s time to prove it.
Most AI detection models are trained and evaluated on internal data. It isn’t often that we can take a step back and see how our work fares against the state of the art in deepfake generation: the kinds of images and videos being shared on social media that actively harm our ability to believe what we see online. For this reason, we are introducing the Deep Media Deepfake Census, a robust analysis of the types and varieties of deepfakes that circulate on social media and news platforms, and of how our DeepID detectors perform on these publicly available images, videos, and audio clips.
Our team is embarking on an ambitious project to collect thousands of samples from Instagram, Twitter, Facebook, news sources, and Reddit. These samples will be compiled into golden sets.
A golden set is a meticulously curated collection of real-world examples: images, videos, and audio clips that represent the cutting edge of what is being shared and circulated on social media and news platforms. These sets are not random assortments of data; they are representative of the kinds of content actively being used to manipulate public perception every day.
From celebrity deepfakes that blur the line between reality and fiction to politically motivated videos that seek to sway opinions and actions, these golden sets are a mirror of the state of the art in content manipulation. By focusing on these real-world examples, we ensure that our DeepID detectors are tested against the most challenging and relevant scenarios, providing a true measure of their effectiveness in the ongoing battle against misinformation.
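For illustration, here is a minimal sketch of how a single golden-set entry might be organized. The `GoldenSample` structure, its field names, and the example values are our own assumptions for this sketch, not Deep Media's internal schema.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical schema for one golden-set entry; fields are illustrative
# assumptions, not Deep Media's internal format.
@dataclass
class GoldenSample:
    sample_id: str                                  # unique identifier within the set
    platform: Literal["instagram", "twitter", "facebook", "reddit", "news"]
    media_type: Literal["image", "video", "audio"]
    source_url: str                                 # where the sample was found in the wild
    label: Literal["real", "fake"]                  # ground-truth label from manual curation
    notes: str = ""                                 # e.g. suspected generator, context of the post

# Example entry: a manually verified deepfake image found on Instagram.
sample = GoldenSample(
    sample_id="ig-000123",
    platform="instagram",
    media_type="image",
    source_url="https://example.com/post/123",      # placeholder URL
    label="fake",
    notes="celebrity face swap, suspected diffusion-based generator",
)
```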
Importantly, our detectors have not been trained on any of this data. This is a deliberate choice to test the robustness and adaptability of our models in the wild. By doing so, we aim to demonstrate the effectiveness of our detection algorithms in identifying deepfakes that they have never encountered before.
Starting next week, we will release the performance metrics of our detectors on Instagram data. Following that, we will sequentially release results for Twitter, Facebook, Reddit, and news sources. We will begin with images, then move on to videos, and finally audio. This phased approach lets us thoroughly analyze our detectors’ performance across different media types and platforms.
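As a rough sketch of what this kind of held-out evaluation can look like in practice, the snippet below scores a detector on golden-set samples it was never trained on and reports accuracy per platform and media type. The `detector.predict` call and the sample fields are hypothetical stand-ins for illustration, not the DeepID API.

```python
from collections import defaultdict

def evaluate_golden_set(samples, detector):
    """Score a detector on a held-out golden set, grouped by platform and media type.

    `samples` are GoldenSample entries as sketched above; `detector.predict` is a
    hypothetical stand-in for a detection model that returns "real" or "fake".
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        prediction = detector.predict(s.source_url, media_type=s.media_type)
        key = (s.platform, s.media_type)
        total[key] += 1
        correct[key] += int(prediction == s.label)
    # Accuracy per (platform, media type) bucket, mirroring the phased releases.
    return {key: correct[key] / total[key] for key in total}
```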
The Deep Media Deepfake Census is not just an exercise in data collection and analysis. It is a statement of our commitment to transparency and accountability in the fight against deepfake technology. By sharing our findings with the public, we hope to foster a greater understanding of the challenges we face and the progress we are making in safeguarding the truth in our digital age. Stay tuned for our upcoming releases, and we encourage everyone, inside the tech space or outside it, to join us in this crucial endeavor to protect the integrity of our increasingly online world.