FakeNet AI

Blocking Manipulated Media Content Through Advanced Detection Algorithms

The first time Raymond Lee, a 2019 graduate of UC Berkeley’s Master of Information and Data Science program, saw a Deepfake, an AI-generated video in which a person’s likeness is replaced with someone else’s, he was surprised at how real it seemed.

Generative Adversarial Networks (GANs), the underlying technology used to create Deepfakes, are improving at a rapid pace. “In a future where anyone can be made to appear to say or do anything, how will we know what to believe?” he asked.

Concerned about the harm that could result from manipulated news media and social media content, Lee and his team of Data Science, EECS, and Business Administration students launched FakeNetAI, a Deepfake detection software-as-a-service (SaaS) that aims to “protect against economic, societal, and political threats.”

The urgency for a novel detection technology comes from an impending surge in Deepfake videos, Lee explained, “that if left unchecked, will flood our newsfeeds and cause us to question everything we see or hear on the internet.” In a self-conducted survey, the FakeNetAI team found that only 59% of respondents could tell a Deepfake from a real video, a share they expect to fall as Deepfakes become easier to create and more realistic.

“As of now, we lack automated ways to detect Deepfakes in a reliable and scalable fashion,” said UC Berkeley Computer Science Professor Dawn Song in a Forbes article. “It will be an arms race between those that create Deepfakes and those that seek to detect them.”

FakeNetAI generates novel Deepfakes to train its machine learning algorithm and improve its detection capabilities. The team also runs “red teaming” exercises to identify vulnerabilities in the detector, challenge them, and close them. Through these efforts, they have built detection capabilities robust enough to “stay ahead of Deepfake attacks,” said Lee.
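
FakeNetAI has not disclosed its model or training pipeline; purely as an illustration of the general approach such detectors rely on, the sketch below trains a generic real-versus-fake frame classifier in PyTorch, with random tensors standing in for labeled face crops taken from real and GAN-generated videos. Every model choice and parameter here is an assumption for the sake of the example, not FakeNetAI’s implementation.

import torch
import torch.nn as nn
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"

# CNN backbone with a single logit output: a positive logit means "fake".
model = resnet18(num_classes=1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 8 RGB "face crops" at 224x224 with random real/fake labels.
frames = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, 2, (8, 1), device=device).float()

model.train()
for step in range(5):  # a real run would iterate over a large labeled dataset
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()

# At inference time, per-frame scores are typically aggregated across a video
# into a single authenticity score.
model.eval()
with torch.no_grad():
    video_fake_score = torch.sigmoid(model(frames)).mean().item()
print(f"aggregated fake score for this (random) batch: {video_fake_score:.2f}")

A detector along these lines would have to be retrained continually on newly generated Deepfakes, which is the role the red-teaming exercises described above would play in keeping it ahead of new generation techniques.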

FakeNetAI recently entered into an agreement to deliver a beta product to one of the largest enterprises in the world, and has begun product iteration discussions with several other large enterprises and news media outlets, Lee explained. The product detects Deepfakes with over 95% accuracy and has caught notable examples such as the Obama “Puppetmaster,” the Mark Zuckerberg “SPECTRE” video, the Elon Musk “Zoom-bomb,” and MIT’s “Moon Disaster” fake of President Nixon.

Deepfake pornography has already harmed several victims, and a suspected Deepfake even helped spark an attempted coup in Gabon, Lee said. “FakeNetAI will stop the harmful proliferation of Deepfakes… It will restore online trust for all.”

Live pitches on Sept 23!
