
How GPT-4o defends your identity against AI-generated deepfakes – DNyuz

Deepfake incidents are surging in 2024 and are expected to increase by 60% or more this year, pushing the number of global cases to 150,000 or more. That makes AI-powered deepfake attacks the fastest-growing type of adversarial AI today. Deloitte predicts that deepfake attacks will cause more than $40 billion in damage by 2027, with banking and financial services being the main targets.

AI-generated voice and video fabrications are blurring the lines of credibility and eroding trust in institutions and governments. Deepfakes are now so pervasive among nation-state cyberwarfare organizations that they have matured into a standard attack tactic for nations locked in constant conflict with one another.

“In today’s elections, developments in AI, such as generative AI or deepfakes, have evolved from mere disinformation to advanced tools for deception. AI has made it increasingly difficult to distinguish between real and fabricated information,” Srinivas Mukkamala, chief product officer at Ivanti, told VentureBeat.

Sixty-two percent of CEOs and senior managers believe deepfakes will cause at least some business costs and complications for their organization over the next three years, while 5% see it as an existential threat. Gartner predicts that by 2026, attacks using AI-generated deepfakes on facial biometrics will result in 30% of companies no longer considering such identity verification and authentication solutions as reliable on their own.

“Recent research from Ivanti shows that more than half of office workers (54%) are unaware that advanced AI can imitate someone’s voice. This statistic is worrying as these individuals will participate in the upcoming elections,” Mukkamala said.

The US Intelligence Community’s 2024 Threat Assessment states that “Russia is using AI to create deepfakes and developing the ability to fool experts. Individuals in war zones and unstable political environments can serve as some of the most valuable targets for such deepfake malign influence.” Deepfakes have become so common that the Department of Homeland Security has released a guide, Increasing Threats from Deepfake Identities.

How GPT-4o is designed to detect deepfakes

OpenAI’s latest model, GPT-4o, is designed to identify and stop these growing threats. It is an “autoregressive omni-model, which accepts any combination of text, audio, image and video as input,” as described in the system card published on August 8. OpenAI writes: “We only allow the model certain pre-selected voices and use an output classifier to detect if the model deviates from them.”

Identifying potentially deepfaked multimodal content is one of the benefits of the design decisions that together define GPT-4o. Also notable is the amount of red teaming performed on the model, which is among the most extensive of any recent AI model release in the industry.

All models must continually train and learn from attack data to maintain their edge, and that’s especially true when it comes to keeping up with attackers’ deepfake craft, which has become indistinguishable from legitimate content.

The following section explains how GPT-4o’s features help identify and stop audio and video deepfakes.

Key GPT-4o capabilities for detecting and stopping deepfakes

Key features of the model that strengthen its ability to identify deepfakes include:

Detection of generative adversarial networks (GANs). GPT-4o can identify synthetic content created with the same technology attackers use to produce deepfakes. The model can spot previously unnoticeable discrepancies in the content-generation process that even GANs cannot fully mask. An example is how GPT-4o analyzes errors in the way light interacts with objects in video footage, or inconsistencies in the pitch of a voice over time. The model’s GAN detection highlights these small defects, which are invisible to the human eye or ear.
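
As a toy illustration of the kind of temporal consistency check described above, the sketch below flags implausible jumps in a voice's pitch contour. The frame representation, threshold and function name are invented for illustration; this is not OpenAI's implementation.

```python
# Illustrative sketch: flag unnatural pitch discontinuities in an
# audio track, one of the temporal inconsistencies deepfake
# detectors look for. Pitches are per-frame estimates in Hz;
# None marks an unvoiced frame.

def flag_pitch_anomalies(pitches_hz, max_jump_hz=60.0):
    """Return frame indices where pitch jumps more than max_jump_hz
    between consecutive voiced frames -- a crude proxy for the
    splicing artifacts synthetic audio can exhibit."""
    anomalies = []
    for i in range(1, len(pitches_hz)):
        prev, cur = pitches_hz[i - 1], pitches_hz[i]
        if prev is None or cur is None:  # unvoiced frames carry no pitch
            continue
        if abs(cur - prev) > max_jump_hz:
            anomalies.append(i)
    return anomalies

# A natural contour drifts smoothly; a spliced one jumps.
natural = [120, 122, 125, 124, 121]
spliced = [120, 122, 240, 124, 121]
print(flag_pitch_anomalies(natural))  # []
print(flag_pitch_anomalies(spliced))  # [2, 3]
```

A production detector would of course combine many such cues rather than rely on one threshold.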

A GAN usually consists of two neural networks: a generator that produces synthetic data (images, video or audio) and a discriminator that evaluates its realism. The generator’s goal is to improve the content’s quality until it deceives the discriminator. This adversarial technique creates deepfakes that are almost indistinguishable from real content.
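
To make the generator/discriminator dynamic concrete, here is a deliberately tiny scalar "GAN" in pure Python. Both players are single parameters trained by finite differences rather than real neural networks; this is a sketch of the adversarial loop, not a usable model.

```python
import math
import random

random.seed(0)

real = lambda: random.gauss(4.0, 0.5)   # samples from the "real" data
noise = lambda: random.gauss(0.0, 1.0)  # latent noise fed to the generator

g_w = 0.0            # generator G(z) = g_w + z: shifts noise by a bias
d_w, d_b = 1.0, 0.0  # discriminator D(x) = sigmoid(d_w * x + d_b)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

lr, eps = 0.05, 1e-4
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr, xf = real(), g_w + noise()
    def d_loss(w, b):
        return -(math.log(sigmoid(w * xr + b) + 1e-9)
                 + math.log(1.0 - sigmoid(w * xf + b) + 1e-9))
    base = d_loss(d_w, d_b)
    grad_w = (d_loss(d_w + eps, d_b) - base) / eps
    grad_b = (d_loss(d_w, d_b + eps) - base) / eps
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = noise()
    def g_loss(w):
        return -math.log(sigmoid(d_w * (w + z) + d_b) + 1e-9)
    g_w -= lr * (g_loss(g_w + eps) - g_loss(g_w)) / eps

# The generator's bias drifts toward the real mean (about 4), so its
# samples become increasingly hard to tell apart from real ones.
print(round(g_w, 1))
```

The same tug-of-war, scaled up to deep networks and gradient backpropagation, is what makes modern deepfakes so convincing.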

Voice authentication and output classifiers. One of the most valuable features of GPT-4o’s architecture is its voice authentication filter, which cross-references each generated voice against a database of pre-approved, legitimate voices. What’s fascinating about this capability is how the model uses neural voice fingerprints, tracking more than 200 unique characteristics including pitch, cadence and accent. GPT-4o’s output classifier immediately terminates generation if an unauthorized or unrecognized voice pattern is detected.
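
A minimal sketch of how such an output classifier could gate generated audio against an allowlist of approved voices. The fingerprints here are toy 3-dimensional vectors and the names and threshold are hypothetical; OpenAI has not published the filter's internals.

```python
# Hypothetical allowlist gate: compare a generated clip's voice
# fingerprint against pre-approved fingerprints and terminate
# generation if nothing matches closely enough.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

APPROVED = {  # pre-approved voice fingerprints (toy vectors)
    "preset_a": [0.9, 0.1, 0.3],
    "preset_b": [0.2, 0.8, 0.5],
}

def classify_output(fingerprint, threshold=0.95):
    """Allow the clip only if it closely matches an approved voice;
    otherwise signal the caller to terminate generation."""
    best = max(APPROVED, key=lambda k: cosine(fingerprint, APPROVED[k]))
    score = cosine(fingerprint, APPROVED[best])
    return ("allow", best) if score >= threshold else ("terminate", None)

print(classify_output([0.88, 0.12, 0.31]))  # close to preset_a -> allow
print(classify_output([0.1, 0.1, 0.9]))     # unapproved voice -> terminate
```

Real voice embeddings have hundreds of dimensions, but the gating logic, match against an allowlist or refuse to emit audio, is the same shape.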

Multimodal cross-validation. OpenAI’s system card describes this capability extensively within the GPT-4o architecture. GPT-4o works in real time across text, audio and video inputs, cross-validating multimodal data as legitimate or not. If the audio does not match the expected text or video context, the system flags it. Red teamers found this to be especially crucial for detecting AI-generated lip syncing or attempts at video impersonation.
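
The cross-validation idea can be sketched as a per-segment consistency check between modalities. The segment format and the notion of a word-level lip-reading signal are assumptions for illustration, not a description of GPT-4o's internals.

```python
# Illustrative sketch: compare per-segment signals from the audio and
# video streams of a clip and flag segments where they disagree,
# e.g. a lip-sync mismatch in an impersonation attempt.

def cross_validate(segments):
    """Each segment pairs the words heard in the audio with the words
    read off the speaker's lips by a (hypothetical) visual model.
    Returns the indices of segments where the modalities disagree."""
    flagged = []
    for i, seg in enumerate(segments):
        if seg["audio_words"] != seg["lip_words"]:
            flagged.append(i)
    return flagged

clip = [
    {"audio_words": "please approve", "lip_words": "please approve"},
    {"audio_words": "the transfer",   "lip_words": "the invoice"},  # mismatch
]
print(cross_validate(clip))  # [1]
```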

The number of deepfake attacks on CEOs is increasing

Of the thousands of CEO deepfake attempts this year alone, the attempt against the CEO of the world’s largest advertising agency shows just how sophisticated attackers are becoming.

Another attack took place over Zoom with multiple deepfake identities on the call, including the company’s CFO: a finance executive at a multinational firm was reportedly tricked into approving a $25 million transfer by deepfakes of the CFO and other senior staff.

In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity professionals defend systems, while also commenting on how attackers are using them. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 US elections and the threats from China and Russia.

“And if now, in 2024, with the ability to make deepfakes, and some of our in-house guys made some funny parody videos with me and it was just to show me how scary it is, you wouldn’t be able to say that it wasn’t me in the video,” Kurtz told the WSJ. “So I think that’s one of the areas that I’m really concerned about. There are always concerns about infrastructure and things like that. In those areas, voting is still largely on paper and the like. Some of it isn’t, but how you create the false narrative to get people to do things that a nation state wants them to do is the area that really concerns me.”

The crucial role of trust and security in the AI era

OpenAI’s prioritization of design goals and an architectural framework that foreground defect detection in audio, video and multimodal content reflect the future of generative AI models.

“The rise of AI over the past year has brought the importance of trust in the digital world to the forefront,” said Christophe Van de Weyer, CEO of Telesign. “As AI continues to evolve and become more accessible, it is critical that we prioritize trust and security to protect the integrity of personal and institutional data. At Telesign, we are committed to using AI and ML technologies to combat digital fraud, ensuring a more secure and reliable digital environment for everyone.”

VentureBeat expects OpenAI to expand GPT-4o’s multimodal capabilities, including voice authentication and GAN-based detection, to identify and eliminate deepfake content. As businesses and governments increasingly rely on AI to improve their operations, models like GPT-4o are becoming indispensable for securing their systems and ensuring trustworthy digital interactions.

Mukkamala emphasized to VentureBeat: “When all is said and done, however, skepticism is the best defense against deepfakes. It is essential to avoid taking information at face value and to critically assess its authenticity.”

The post How GPT-4o defends your identity against AI-generated deepfakes appeared first on VentureBeat.