Reality Defender (YC W22) – Deepfake Detection Platform

Hi HN, we’re Ben, Gaurav and Ali from Reality Defender (https://www.realitydefender.ai). We help companies, governments, and journalists determine if media is real or fake, focusing on audio, video and image manipulation. Our API and web app provide real-time scanning, risk scoring, and PDF report cards.

Recent advancements in machine learning make it possible to create images, videos and audio of real people saying and doing things they never said or did. The recent spread of this technology has enabled anyone to create highly realistic deepfakes. Although some deepfakes are detectable to the eye by experienced observers who look closely, many people either don’t have experience or are not always looking closely—and of course the technology is only continuing to improve. This marks a leap in the ability of bad actors to distort reality, jeopardizing financial transactions, personal and brand reputations, public opinion, and even national security.

We are a team with PhD and Master’s degrees in data science from Harvard, NYU, and UCLA. Between us, we have decades of experience at Goldman Sachs, Google, the CIA, the FDIC, the Department of Defense, and Harvard University Applied Research, working at the intersection of machine learning and cybersecurity. But our current work began with a rather unlikely project: we tried to duplicate Deepak Chopra. We were working with him to build a realistic deepfake that would let users have a real-time conversation with “Digital Deepak” from their iPhones. Creating the Deepak deepfake was surprisingly simple, and the result was so alarmingly realistic that we immediately began looking for models that could help users tell a synthetic version from the real thing.

We did not find a reliable solution. Frustrated that we’d already spent a week on something we thought would take our coffee break, we doubled down and set out to build our own model that could detect manipulated media.

After investigating, we learned why a consistently accurate solution didn’t exist. Companies (including Facebook and Microsoft) were trying to build their own silver-bullet, single-model detection methods, or what we call “one model to rule them all.” In our view, this approach cannot work because adversaries and the underlying generation technologies are constantly evolving. For the same reason, there will never be a single model that solves antivirus, malware detection, and so on.

We believe that any serious solution to this problem requires a “multi-model” approach that integrates the best deepfake detection algorithms into an aggregate “model of models.” So we trained an ensemble of deep-learning detection models, each focused on its own set of features, and then combined their scores.
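
To make that concrete, here is a minimal sketch of score-level ensembling. The detector names, weights, and aggregation scheme below are illustrative only, not our production stack:

    # Illustrative "model of models": each detector targets one family of
    # artifacts and returns a manipulation probability; the ensemble combines
    # the per-model scores (here, a simple weighted average) into one risk score.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Detector:
        name: str                          # e.g. "face_warping", "audio_spectral"
        weight: float                      # relative trust, tuned on validation data
        predict: Callable[[bytes], float]  # media bytes -> P(manipulated), in [0, 1]

    def ensemble_risk_score(media: bytes, detectors: List[Detector]) -> Dict[str, float]:
        per_model = {d.name: d.predict(media) for d in detectors}
        total_weight = sum(d.weight for d in detectors)
        aggregate = sum(d.weight * per_model[d.name] for d in detectors) / total_weight
        return {"aggregate": aggregate, **per_model}

A real ensemble also has to handle disagreement between models and routing by media type, but the core idea is the same: no single detector is trusted on its own.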

We challenged ourselves to build a scalable solution that integrates the best of our deepfake detection models with models from our collaborators (Microsoft, UC Berkeley, Harvard). We began with a web app proof of concept, and quickly received hundreds of requests for access from governments, companies, and researchers.

Our first users turned to our platform for deepfake scenarios ranging from bad to outright scary: Russian disinformation directed at Ukraine and the West; audio mimicking a bank executive requesting a wire transfer; video of Malaysia’s government leadership behaving scandalously; pornography where participants make themselves appear younger; dating profiles with AI-generated profile pictures. All of these, needless to say, are completely fake!

As with computer viruses, deepfakes will continue evolving to circumvent current security measures. New deepfake detection techniques must be as iterative as the generation methods. Our solution not only accepts that, but embraces it. We quickly onboard, test, and tune third party models for integration into our model stack, where they can then be accessed via our web app and API. Our mission has attracted dozens of researchers who contribute their work for testing and tuning, and we’ve come up with an interesting business model for working together: when their models meet our baseline scores, we provide a revenue share for as long as they continue to perform on our platform. (If you’re interested in participating, we’d love to hear from you!)
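
For researchers wondering what “meeting our baseline” looks like in practice, here is a simplified sketch of the kind of gate a contributed model has to clear; the metric and threshold are placeholders, not our actual acceptance criteria:

    # Evaluate a candidate detector on a held-out benchmark and only promote it
    # into the ensemble if it clears a baseline score. AUC and the 0.90 cutoff
    # are illustrative placeholders.
    from typing import Callable, List, Tuple
    from sklearn.metrics import roc_auc_score

    def meets_baseline(candidate: Callable[[bytes], float],
                       benchmark: List[Tuple[bytes, int]],
                       baseline_auc: float = 0.90) -> bool:
        labels = [is_fake for _, is_fake in benchmark]
        scores = [candidate(media) for media, _ in benchmark]
        return roc_auc_score(labels, scores) >= baseline_auc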

We have continued to scale our web app and have launched an API that we are rolling out to pilot customers. Currently the most popular use cases are KYC onboarding fraud detection and voice fraud detection (e.g., banks, marketplaces), and user-generated deepfake content moderation (e.g., social media, dating platforms, news and government organizations).
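
To give a rough sense of what an integration could look like, here is a hypothetical sketch; the endpoint, headers, and response fields are placeholders for illustration, not our published API (pilot customers receive the actual documentation):

    # Hypothetical integration sketch: upload a media asset, receive a risk score.
    # URL, auth scheme, and field names are placeholders, not the real API.
    import requests

    API_URL = "https://api.example.com/v1/scan"   # placeholder endpoint
    API_KEY = "YOUR_API_KEY"

    with open("onboarding_video.mp4", "rb") as media:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": media},
        )
    response.raise_for_status()
    result = response.json()
    print(result["risk_score"], result["verdict"])  # e.g. 0.97, "likely manipulated"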

We are currently testing a monthly subscription to scan a minimum of 250 media assets per month. We offer a 30-day pilot that converts into a monthly subscription. If you’d like to give it a try, go to www.realitydefender.ai, click “Request Trial Access”, and mention HN in the comments field.

We’re here to answer your questions and hear your ideas, and would love to discuss any interesting use cases. We’d also be thrilled to collaborate with anyone who wants to integrate our API or who is working, or would like to work, in this space. We look forward to your comments and conversation!


