Deepfake Technology Issues

  • Henry Ajder (Influencer)

    AI and Deepfake Cartographer

    16,534 followers

    This is a significant move in consumer deepfake protection: Chinese smartphone brand Honor introduces native deepfake detection for video calls. Announced last year but globally available from April, Honor claims it can identify suspected synthetic content in live video calls within six seconds. Using continuous frame-by-frame monitoring, Honor's detection analyses discrepancies in "eye contact, lighting, image clarity, and video playback". If suspected synthetic content is detected, users automatically receive a pop-up warning, much like anti-virus software or a browser warning for a site without a valid SSL certificate.

    The anti-virus framing for detection is understandably appealing: a seamless (but not infallible) protective layer between users and content on social media, video calls, or even suspected AI-generated emails. It's encouraging to see big consumer tech companies taking the risk of deepfakes seriously and looking to protect users with this integrated approach, but caveats still apply:

    🔎 It's unclear how the increasing use of filters or other benign synthetic effects may affect whether alerts/detection are triggered.

    🔎 No reliability benchmark has been shared, nor any red teaming/robustness testing. As usual, unreliable and unevolving detection often does more harm than good...

    🔎 Research is still needed to understand whether these notifications are meaningful interventions in a live conversational context. Too many false positives and the 'crying wolf' effect may also feed notification fatigue.

    Still, I'm confident Honor won't be the last smartphone company to introduce these native detection capabilities. Deepfake fraud numbers have skyrocketed (one study found a 2137% increase in the last three years), and AI-generated content continues to grow more pervasive and sophisticated. I wouldn't be surprised if these features become key product differentiators moving forward, particularly for corporate customers where security is the ultimate priority.
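    For intuition, here is a minimal Python sketch of the kind of rolling-window, frame-by-frame screening described above. The per-frame scoring function is a toy sharpness heuristic standing in for a trained detector, and the window size and alert threshold are illustrative assumptions, not Honor's actual parameters:

      import cv2  # pip install opencv-python
      from collections import deque

      WINDOW = 180          # ~6 seconds of frames at 30 fps (assumed)
      ALERT_FRACTION = 0.7  # share of flagged frames that triggers a warning (assumed)

      def frame_suspicion_score(frame) -> float:
          # Toy stand-in: very low sharpness (Laplacian variance) is treated as
          # suspicious. A real detector would be a trained model over gaze,
          # lighting, and compression artifacts.
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
          return 1.0 if sharpness < 50.0 else 0.0

      def monitor(capture) -> None:
          recent = deque(maxlen=WINDOW)
          while True:
              ok, frame = capture.read()
              if not ok:
                  break
              recent.append(frame_suspicion_score(frame) > 0.5)
              # Warn once enough of the rolling ~6-second window looks synthetic.
              if len(recent) == WINDOW and sum(recent) / WINDOW > ALERT_FRACTION:
                  print("Warning: suspected synthetic content in this call")
                  recent.clear()

      monitor(cv2.VideoCapture(0))  # screen the default camera stream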

  • Jason Rebholz (Influencer)

    I secure the agentic workforce | CISO, AI Advisor, Speaker, Mentor

    31,919 followers

    There’s more to the $25 million deepfake story than what you see in the headlines. I pulled the original story to get the full scoop. Here are the steps the scammers took:

    1. The scammers sent a phishing email to up to three finance employees in mid-January, saying a “secret transaction” had to be done.

    2. One of the finance employees fell for the phishing email. This led to the scammers inviting the employee to a video conference. The call included what appeared to be the company CFO, other staff, and some unknown outsiders. This was the deepfake technology at work, mimicking employees' faces and voices.

    3. On the group video conference, the scammers asked the finance employee to do a self-introduction but never interacted with them. This limited the likelihood of getting caught. Instead, the scammers just gave orders from a script and moved on to the next phase of the attack.

    4. The scammers followed up with the victim via instant messaging, emails, and one-on-one video calls using deepfakes.

    5. The finance employee then made 15 transfers totaling $25.6 million USD.

    As you can see, deepfakes were a key tool for the attackers, but persistence was critical here too. The scammers did not let up and did all they could to pressure the individual into transferring the funds.

    So, what do businesses do about mitigating this type of attack in the age of deepfakes?

    - Always report suspicious phishing emails to your security team. In this context, the other phished employees could have been an early warning that something weird was happening.

    - Trust your gut. The finance employee reported a “moment of doubt” but ultimately went forward with the transfer after the video call and persistence. If something doesn’t feel right, slow down and verify.

    - Lean into out-of-band authentication for verification. Use a known-good method of contact to verify the legitimacy of a transaction (a rough sketch follows this post).

    - Explore technology-driven identity verification platforms for high-dollar wire transfers. This can help reduce the chance of human error.

    And one of the best pieces of advice I saw was from Nate Lee yesterday, who called out building a culture where your employees are empowered to verify transaction requests. Nate said the following: “The CEO/CFO and everyone with power to transfer money needs to be aligned on and communicate the above. You want to ensure the person doing the transfer doesn't feel that by asking for additional validation they're pushing back or acting in a way that signals they don't trust the leader.”

    Stay safe (and real) out there.

    ------------------------------

    📝 Interested in leveling up your security knowledge? Sign up for my weekly newsletter using the blog link at the top of this post.
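    For illustration, a minimal sketch of that out-of-band step. Everything here is hypothetical (names, threshold, directory); the key property is that the callback number comes from your own records, never from the inbound request:

      from dataclasses import dataclass

      @dataclass
      class TransferRequest:
          requester: str       # identity claimed on the call, e.g. "CFO"
          amount_usd: float
          destination: str

      # Known-good contacts maintained independently of any inbound message.
      DIRECTORY = {"CFO": "+1-555-0100"}
      HIGH_RISK_USD = 10_000.0  # assumed threshold for extra verification

      def approve(req: TransferRequest, confirmed_via_callback: bool) -> bool:
          # Low-value transfers follow the normal workflow.
          if req.amount_usd < HIGH_RISK_USD:
              return True
          # High-value transfers require a human to call the directory number
          # and confirm the exact amount and destination with the requester.
          if req.requester not in DIRECTORY:
              return False  # unknown requester: escalate, never transfer
          return confirmed_via_callback

      req = TransferRequest("CFO", 25_600_000.0, "acct-XXXX")
      print(approve(req, confirmed_via_callback=False))  # False: blocked until verified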

  • Dr. Barry Scannell (Influencer)

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    59,244 followers

    There’s a pretty good chance that the shocking rate at which AI is advancing is outpacing your cyber security training, policies, and maybe even technologies. Have you addressed the use of AI and deepfakes in your cyber security policies?

    In a recent and alarming development that seems to have leapt straight from the pages of a science fiction novel, a Hong Kong-based finance worker at a multinational firm was defrauded of $25 million, falling victim to an elaborate scam that employed deepfake technology to impersonate the company's CFO. This incident, which unfolded during a video conference call, marks a disturbing milestone in the intersection of cybercrime and AI, underscoring the urgent imperative for companies to bolster their cybersecurity frameworks, particularly against the backdrop of deepfake technology.

    The mechanics of the scam were deceptively simple yet devastatingly effective. The finance employee was lured into a video call with several participants, believed to be colleagues and the CFO, only to discover later that each participant was a digital fabrication. The deepfake avatars, mirroring the appearance and voice of real company personnel, instructed the employee to initiate a "secret transaction", leading to the unauthorised transfer of $25.6 million.

    This incident is not an isolated event but rather a harbinger of the potential threats posed by AI-driven disinformation and fraud. The use of deepfake technology to bypass facial recognition software, impersonate individuals for fraudulent purposes, and undermine the integrity of personal and corporate identities presents a clear and present danger. The case in Hong Kong, where fraudsters successfully manipulated digital identities to orchestrate financial theft, exemplifies the sophistication of contemporary cybercrime.

    The implications of this event extend far beyond the immediate financial loss. It serves as a stark reminder of the vulnerabilities inherent in digital communication platforms and the necessity for robust verification processes. The reliance on video conferencing and digital communication, accelerated by the global pandemic, has exposed systemic weaknesses ripe for exploitation.

    In response to this escalating threat, it is incumbent upon companies to adopt comprehensive cybersecurity strategies that address the unique challenges posed by deepfake technology. This includes implementing advanced authentication protocols, raising awareness and training employees on the potential risks of deepfakes, and deploying AI-driven security measures capable of detecting and neutralising synthetic media.

    As AI outputs become increasingly indistinguishable from reality, the line between authentic and artificial communication will blur, challenging individuals and organisations to navigate a new frontier of digital authenticity. It compels a reevaluation of the assumptions underpinning digital trust and identity verification, urging a proactive approach to cyber defence.

  • Tom Vazdar

    Principal Consultant | Cybersecurity & AI (Governance, Risk & Compliance) | CEO @ Riskoria | Media Commentator on Cybercrime & Digital Fraud | Creator of HeartOSINT

    9,978 followers

    What happens when deepfake technology becomes a service anyone can buy?

    I've been tracking the Deepfakes-as-a-Service market, and the numbers are alarming. Deepfake fraud attempts jumped 1,300% in 2024, from one attack per month to seven per day.

    Here's what keeps me up at night: the February 2024 Arup case. A finance employee joined a video call with the CFO and several colleagues. Everyone looked real. Everyone sounded real. The employee authorized $25.6 million in wire transfers. Every single person on that call was AI-generated.

    This wasn't some nation-state operation. Underground marketplaces now offer deepfake creation as a point-and-click service. No technical skills required. Just cryptocurrency and malicious intent.

    The psychology is what makes it work. We're wired to trust what we see and hear, especially when it matches our expectations. A realistic video of your CFO making a familiar request triggers immediate credibility. By the time you think to question it, the money's gone.

    Traditional defenses aren't enough anymore:
    → Voice verification systems can be defeated
    → Video calls don't guarantee authenticity
    → Even following verification procedures can fail

    Organizations need multi-channel verification protocols. If someone requests a wire transfer on video, verify through a completely separate channel: code words, challenge-response systems, and procedural friction on high-risk transactions (see the sketch after this post).

    But here's the problem: 99% of security leaders say they're confident in their deepfake defenses, yet only 8.4% actually scored above 80% in detection tests. We think we're protected when we're actually vulnerable.

    Have you updated your verification procedures for the deepfake era?

    #Cybersecurity #AISecurity #DeepfakeFraud #DigitalRisk #FraudPrevention
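    One concrete form of the code-word/challenge-response idea is a pre-shared secret plus a random challenge. This is a generic sketch, not a product or a standard; the secret must be provisioned offline in advance, and the names are illustrative:

      import hashlib
      import hmac
      import secrets

      def new_challenge() -> str:
          # Random nonce sent to the requester over a second channel.
          return secrets.token_hex(16)

      def respond(shared_secret: bytes, challenge: str) -> str:
          # Only a holder of the real pre-shared secret can compute this.
          return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

      def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
          return hmac.compare_digest(respond(shared_secret, challenge), response)

      # Usage: finance issues a challenge to the executive's known device; a
      # deepfake on the call cannot answer without the secret.
      secret = b"provisioned-out-of-band"  # illustrative only; store securely
      challenge = new_challenge()
      print(verify(secret, challenge, respond(secret, challenge)))  # True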

  • Arockia Liborious (Influencer)
    39,201 followers

    The New Corporate Threat: Deepfakes That Even Experts Can't Detect

    Welcome to the new reality where AI doesn’t just generate content, it manufactures convincing lies. You’ve probably seen it:
    - A CEO announces a fake acquisition.
    - A politician "says" something they never did.
    - A voice note "from your boss" requests a fund transfer.

    It all looks real. But it’s not. It’s a deepfake: AI-generated audio, video, or imagery designed to deceive.

    Why it matters: Deepfakes are no longer just internet tricks or entertainment. They’re now:
    - Financial fraud enablers (voice clones used to scam employees)
    - Corporate risk vectors (fake news impacting stock prices)
    - Political weapons (manipulated clips used to sway public opinion)
    - Personal threats (identity misuse, blackmail, defamation)

    How to spot a deepfake. Look for:
    - Unnatural blinking or awkward lip sync
    - Plastic skin or weird lighting
    - Robotic tone or emotionless speech
    - Out-of-character statements
    - No credible source backing the video
    If it feels off, it probably is.

    What you can do:
    - Pause before sharing
    - Use tools like Deepware, Microsoft Video Authenticator, or Adobe Verify
    - Train your teams, especially PR, legal, and finance
    - Push for content provenance in your organization (a minimal sketch follows this post)

    In the GenAI era, trust is currency. Don’t spend it on content you didn’t verify.

    #artificialintelligence
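    On the content-provenance point, here is a minimal sketch of the underlying idea: keep a registry of digests for media your organization actually published, and check circulating copies against it. Real provenance standards such as C2PA embed signed manifests in the file itself; the registry entry below is a placeholder, not real data:

      import hashlib

      def sha256_of(path: str) -> str:
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      # digest -> description of the official upload (placeholder entry)
      PUBLISHED = {
          "0" * 64: "CEO town-hall video, official upload",
      }

      def is_known_original(path: str) -> bool:
          # A mismatch doesn't prove a deepfake, but a match proves the copy
          # is bit-identical to what you released.
          return sha256_of(path) in PUBLISHED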

  • Ben Colman

    CEO at Reality Defender | 1st Place RSA | JP Morgan Hall of Innovation | Ex-Goldman Sachs, Google, YCombinator

    20,692 followers

    Microsoft's case against illicit AI developers confirms what we at Reality Defender have tracked for years: deepfake impersonation has evolved from theoretical concern to sophisticated criminal enterprise, targeting vulnerable individuals daily and far more frequently than last year.

    While those of us with good BS detectors (and, yes, inference-based deepfake detection) can spot celebrity deepfakes from a mile away, these deceptive creations continue to be remarkably effective at defrauding everyday people. The financial impact is substantial, to say the least, and the aftermath of these scams extends beyond financial loss. Most importantly, when someone transfers retirement savings to a deepfaked "Elon Musk" investment scheme or sends money to an AI-generated "Brad Pitt," the profound shame often prevents victims from reporting these incidents, creating a dangerous gap in our understanding of the true scale of this crisis.

    What makes this trend particularly concerning is the organizational sophistication behind these operations. We're seeing structured criminal networks with specialized roles: technical developers creating the AI tools, others perfecting impersonation techniques, and frontline operators executing the financial fraud with increasing effectiveness.

    At Reality Defender, we partner with financial institutions to implement proactive protection against a related threat: deepfake impersonations of legitimate account holders attempting to breach security systems and conduct unauthorized transactions. These attacks threaten both individual finances and institutional reputational integrity, and, like celebrity deepfake scams, they are far more common than reported.

    As generative AI technology becomes even more accessible, we remain committed to sharing our insights while respecting victim privacy. Chances are high that your organization faces AI impersonation risks you haven't yet considered. Reality Defender's proactive detection measures can help you identify these vulnerabilities and implement robust safeguards before your customers or employees become victims.

  • Jeremy Tunis

    “Urgent Care” for Public Affairs, PR, Crisis, Content. Deep experience with BH/SUD hospitals, MedTech, other scrutinized sectors. Jewish nonprofit leader. Alum: UHS, Amazon, Burson, Edelman. Former LinkedIn Top Voice.

    15,984 followers

    Everyone’s talking about Muck Rack’s 2025 State of Journalism report. It’s a doozy. But too many takeaways stop at the surface. “Don’t be overly promotional.” “Pitch within the reporter’s beat.” “Keep it short.” All true. All timeless. But if you work in crisis communications or anywhere near the intersection of trust, media, and AI, those are just table stakes. The real story is what the report says about disinformation and AI’s double-edged role in modern journalism. Here’s where every in-house and agency team should be paying the closest attention:

    🧨 The Risk Landscape: What Journalists Are Actually Worried About

    🚨 Disinformation is the #1 concern. Over 1 in 3 journalists named it their top professional challenge, more than funding, job security, or online harassment.

    🤖 AI is everywhere and largely unregulated. 77% of journalists use tools like ChatGPT and AI transcription, but most work in newsrooms with no AI policies or editorial guidelines.

    🤔 Audience trust is cracking. Journalists are keenly aware of public skepticism, especially when it comes to AI-generated content on complex topics like public safety, politics, or science.

    ‼️ Deepfakes and manipulated media are on the rise. As I discussed yesterday in the AI PR Nightmares series, the tools to fabricate reality are here. And most organizations aren’t ready.

    🛡️ What Smart Comms Teams Should Do Next

    1. Label AI content before someone else exposes it:
    → Add “AI-assisted” disclosures to public-facing materials, even if it’s just for internal drafts. Transparency builds resilience.

    2. Don’t outsource final judgment to a tool:
    → Use AI to draft or summarize, but ensure every high-stakes message, especially in a crisis, is reviewed by a human with context and authority.

    3. Get serious about deepfake detection:
    → If your org handles audio or video from public figures, execs, or customers, implement deepfake scanning. Better to screen than go viral for the wrong reasons.

    4. Set up disinfo early warning systems:
    → Combine AI-powered media monitoring with human review to track false narratives before they go wide.

    5. Build your AI & disinfo playbook now:
    → Don’t wait for legal or IT to set policy. Comms should lead here. A one-pager with do’s, don’ts, and red-flag escalation rules goes a long way.

    6. Train everyone who touches messaging:
    → Even if you have a great media team, everyone in your org needs a baseline understanding of how disinfo spreads and how AI can help or hurt your credibility.

    TL;DR: AI and misinformation aren’t future threats. They’re already shaping how journalists vet sources, evaluate pitches, and report stories. If your communications team isn’t prepared to manage that reality (during a crisis or otherwise), you’re operating with a blind spot.

    If you’re working on these challenges, or trying to, drop me a line if I can help.

  • Jodi Daniels

    Practical Privacy Advisor / Fractional Privacy Officer / AI Governance / WSJ Best Selling Author / Keynote Speaker

    20,460 followers

    Fraud no longer hides in the shadows. It might show up disguised as someone you know. Like when the CEO calls and her voice on the phone sounds exactly right. Her urgency feels real, and the wire transfer request to a new bank account seems legitimate, so accounting releases the funds. And just like that, the company loses $20k to a fraudster who weaponized AI.

    This isn't science fiction. It's happening right now to individuals and organizations alike. Fraudsters are creating disturbingly real AI deepfakes that can fool even the most cautious people, and companies need strategies to combat them. Because the audio and visual cues we've relied on for decades are no longer reliable indicators of authenticity.

    Organizations can fight back with these defense strategies:

    ✔ Stay cautious and be wary of anyone requesting money or personal information, even if they look or sound like someone you trust.

    ✔ Don’t send money or share sensitive data in response to a single phone or video call. Phone numbers can be spoofed, so always verify a person’s identity by contacting them separately at a number you trust.

    ✔ Use small action requests, like asking a person to turn their head, blink repeatedly, or hum a song while on a video or phone call. If they decline, freeze up, or go silent, it could be a fraudster.

    ✔ Establish a safe word that only your inner circle knows to confirm the identity of someone claiming to be a colleague, family member, or friend.

    ✔ Use strong passwords. Enable multifactor authentication (MFA) on all company devices and accounts whenever possible.

    And don’t forget to report AI deepfakes to law enforcement and any relevant social media channels, websites, and other platforms where the encounter took place.

    All of these tips work for individuals too, because hackers like causing havoc with anyone they can.

    The question isn't whether AI deepfakes will target your organization. It's whether your organization will be ready when they do.

    Food for thought as we kick off Cybersecurity Awareness Month.

    ♻ Share our infographic to help companies combat AI deepfakes.

  • Adnan Amjad

    US Cyber Leader at Deloitte

    4,236 followers

    Deepfake-related fraud is increasingly omnipresent. Singular points of security are no longer reliable enough, especially for high-stakes environments like financial service organizations, as a recent Wall Street Journal article featuring Deloitte’s Anish Srivastava explains (https://deloi.tt/4nlto2c).

    To address these complex and evolving threats, banks and financial institutions should implement multi-layered "defense-in-depth" security strategies that can proactively detect, mitigate, and respond to deepfake threats and restore trust. Organizations can implement multiple layers of security to protect against deepfakes, including secure user onboarding, contextual analysis, media liveness confirmation, strong authentication and session-binding measures, and deepfake detection AI (see the sketch after this post).

    Maintaining deepfake protection requires ongoing employee training, regular security audits, continuous monitoring of emerging threats, and prompt response to incidents.
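    As a toy illustration of the layered idea (not a Deloitte reference design), a session must clear every check, so defeating one layer is not enough. Layer names, session attributes, and thresholds are all assumptions:

      from typing import Callable

      Check = Callable[[dict], bool]

      # Each layer is a hypothetical check over session attributes.
      LAYERS: list[tuple[str, Check]] = [
          ("secure_onboarding", lambda s: s.get("identity_proofed", False)),
          ("strong_auth",       lambda s: s.get("mfa_passed", False)),
          ("session_binding",   lambda s: s.get("device_bound", False)),
          ("media_liveness",    lambda s: s.get("liveness_score", 0.0) > 0.9),
          ("deepfake_model",    lambda s: s.get("synthetic_score", 1.0) < 0.1),
      ]

      def authorize(session: dict) -> bool:
          for name, check in LAYERS:
              if not check(session):
                  print(f"blocked at layer: {name}")
                  return False
          return True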

  • Philip Coniglio (Influencer)

    President & CEO @ AdvisorDefense | Cybersecurity Expert

    13,860 followers

    Deepfake Dominance in Cybercrime.

    We’ve crossed a tipping point: 40% of phishing campaigns are now AI-powered, and threat actors are extracting as much as $81,000 from a single victim using deepfake-enhanced tactics. Emails, calls, and even video conferences can now be convincingly AI-generated.

    This means traditional “spot the red flag” awareness training is no longer enough. Trusting your eyes or ears alone is no longer safe in a world where fraudsters can impersonate anyone.

    Zero Trust must extend to human identity verification. Confirm unexpected requests for money, credentials, or sensitive data through an out-of-band channel. Layer your controls: build MFA, identity verification callbacks, and vendor authentication into daily workflows. Reinforce to employees that hesitation and validation are strengths, not weaknesses.

    At AdvisorDefense, we’re preparing RIAs for a reality where cybercrime isn’t just about malware, it’s about manipulation. If 40% of phishing is already AI-driven, the question is: how will your firm adapt before the other 60% gets there too?

    #AdvisorDefense #RIA #Cybersecurity #ZeroTrust
