McAfee Unveils Advanced Deepfake Audio Detection Technology at CES 2024 to Defend Against Rise in AI-Generated Scams and Disinformation


Bengaluru, India – January 9, 2024 – Today, McAfee Corp., a global leader in online protection, announced its AI-powered Deepfake Audio Detection technology, known as Project Mockingbird, at the Consumer Electronics Show. This new, proprietary technology was developed to help defend consumers against the surging threat of cybercriminals using fabricated, AI-generated audio to carry out scams that rob people of money and personal information, enable cyberbullying, and manipulate the public image of prominent figures.

Increasingly sophisticated and accessible Generative AI tools have made it easier for cybercriminals to create highly convincing scams, such as using voice cloning to impersonate a family member in distress asking for money. Others, often called "cheapfakes," may involve manipulating authentic videos, like newscasts or celebrity interviews, by splicing in fake audio to change the words coming out of someone's mouth; this makes it appear that a trusted or known figure has said something different from what was originally said.

Anticipating the ever-growing challenge consumers face in distinguishing real from digitally manipulated content, McAfee Labs, the innovation and threat intelligence arm at McAfee, has developed an industry-leading advanced AI model trained to detect AI-generated audio. McAfee's Project Mockingbird technology uses a combination of AI-powered contextual, behavioral, and categorical detection models to identify whether the audio in a video is likely AI-generated. Currently operating at a 90% accuracy rate, McAfee can detect and protect against AI content created for malicious "cheapfakes" or deepfakes, providing unmatched protection capabilities to consumers.

“With McAfee’s latest AI detection capabilities, we will provide customers a tool that operates at more than 90% accuracy to help people understand their digital world and assess the likelihood of content being different than it seems,” said Steve Grobman, Chief Technology Officer, McAfee. “So, much like a weather forecast indicating a 70% chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”

"The use cases for this AI detection technology are far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation. With McAfee's deepfake audio detection capabilities, we'll be putting the power of knowing what is real or fake directly into the hands of consumers. We'll help consumers avoid 'cheapfake' scams where a cloned celebrity is claiming a new limited-time giveaway, and also make sure consumers know instantly, when watching a video about a presidential candidate, whether it's real or AI-generated for malicious purposes. This takes protection in the age of AI to a whole new level. We aim to give users the clarity and confidence to navigate the nuances in our new AI-driven world, and to protect their online privacy, identity, and well-being," continued Grobman.

Building on its rich history of AI innovation, McAfee will hold the first public demos of Project Mockingbird, its Deepfake Audio Detection technology, onsite at the Consumer Electronics Show 2024. The unveiling of this new AI technology is further evidence of McAfee's focus on developing a comprehensive portfolio of AI models that are cross-platform and serve multiple use cases to safeguard consumers' digital lives.

Why Project Mockingbird

Mockingbirds are a group of birds primarily known for mimicking, or "mocking," the songs of other birds. While there is no proven reason why mockingbirds mock, one theory behind the behavior is that female birds may prefer males who sing more songs, so the males mock other species' songs to expand their repertoire. Similarly, cybercriminals leverage Generative AI to "mock" or clone the voices of celebrities, influencers, and even loved ones to defraud consumers.

Deep Concerns about Deepfake Technology

Consumers are increasingly concerned about the sophisticated nature of these scams, as they no longer trust that their senses and experiences are enough to determine whether what they’re seeing or hearing is real or fake.

For over a decade, McAfee has used AI to safeguard millions of global customers from online privacy and identity threats. By running multiple models in parallel, McAfee can perform a comprehensive analysis of a problem from multiple angles. For example, structural models are used to understand threat types, behavioral models to understand what a threat does, and contextual models to trace the origin of the data underpinning a particular threat. Utilizing multiple models concurrently allows McAfee to provide customers with the most effective information and recommendations, and reinforces the company's commitment to protecting people's privacy, identity, and personal information.
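The multi-model approach described above can be sketched roughly as follows. This is an illustrative sketch only: Project Mockingbird's actual models are proprietary, so the detector functions, their scores, and the simple averaging step are all invented assumptions standing in for the structural, behavioral, and contextual analysis the release describes.

```python
# Hypothetical sketch of running several detection models in parallel
# and combining their scores into one likelihood. All names and values
# below are illustrative assumptions, not McAfee's actual implementation.
from concurrent.futures import ThreadPoolExecutor

def structural_model(audio_features):
    # Placeholder: would classify the threat type from signal structure.
    return 0.8

def behavioral_model(audio_features):
    # Placeholder: would score what the suspected threat does.
    return 0.7

def contextual_model(audio_features):
    # Placeholder: would trace the origin of the underlying data.
    return 0.9

def deepfake_likelihood(audio_features):
    """Run all detectors concurrently and combine their scores into a
    single likelihood, like a 'chance of rain' forecast."""
    models = [structural_model, behavioral_model, contextual_model]
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda m: m(audio_features), models))
    return sum(scores) / len(scores)  # simple average of model scores

print(f"{deepfake_likelihood(None):.0%} likely AI-generated")
```

The single combined score matches the "weather forecast" framing in Grobman's quote: the user is given a probability to weigh, not a binary verdict.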

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Nothing in this document shall be considered an offer by McAfee, create obligations for McAfee, or create expectations of future releases which impact current purchase or partnership decisions.
