North Korean hackers posed as recruiters on LinkedIn, with a level of polish that Michael Shaulov, CEO of Fireblocks, says is now "way harder to detect because of AI." They ran professional interviews and sent polished PDFs and GitHub repos containing "coding assignments." When candidates ran the code? Malware, targeting keys and production access.

What SIEM rule will catch this? You'd need detection engineers to anticipate a complex business risk, and these problems are only going to get more complex. Sophisticated infiltration by nation-state attackers is becoming commonplace. We saw it recently at a major cybersecurity company, where a corporate laptop was operated by an "insider/employee" out of a laptop farm in Washington controlled from North Korea. A very similar insider attack hit the largest fabless semiconductor company. Who will be next?

At Exaforce, we don't translate every attack scenario into rules. We understand business outcomes: identity risks, privileged access anomalies, insider threats. SIEMs see events; Exaforce builds a semantic understanding of data and identities, then detects threats from a business-risk perspective.

Last year, we partnered with Srijan R Shetty, CTO of Fuze, to build a semantic understanding of their environment - Amazon Web Services (AWS), GitHub, Fireblocks, Google, and Microsoft Entra ID - so that more of the #crypto industry can be protected as attacks get more sophisticated. Exaforce delivers the best understanding, visibility, and protection of Fireblocks usage in your environment! https://lnkd.in/gkQXsuwV
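To make the contrast concrete, here is a minimal sketch in Python. All event shapes, field names, and risk weights here are hypothetical illustrations, not Exaforce's actual model or API: it only shows why a single-event SIEM signature and an identity-aware, sequence-level risk score behave differently on the "malicious coding assignment" scenario.

```python
# Minimal sketch (hypothetical schema, not a real product API) contrasting
# an event-level SIEM rule with a semantic, identity-aware risk check.

from dataclasses import dataclass

@dataclass
class Event:
    actor: str   # identity that triggered the event
    action: str  # e.g. "clone_repo", "exec_process", "read_secret"
    source: str  # where the artifact came from, e.g. "github:external"

@dataclass
class Identity:
    name: str
    has_prod_access: bool          # holds production / signing credentials
    recent_external_contacts: int  # unsolicited recruiter-style outreach

def siem_rule(event: Event) -> bool:
    """Classic SIEM-style rule: matches one event signature in isolation.
    It fires on any execution of freshly fetched external code, which
    also matches routine developer work (lots of false positives)."""
    return (event.action == "exec_process"
            and event.source.startswith("github:external"))

def semantic_risk(identity: Identity, events: list[Event]) -> float:
    """Identity-level view: score the business risk of the whole sequence.
    Running unreviewed external code matters most when the identity can
    reach keys or production access, which is what the attackers wanted."""
    ran_external_code = any(
        e.action == "exec_process" and e.source.startswith("github:external")
        for e in events
    )
    touched_secrets = any(e.action == "read_secret" for e in events)

    risk = 0.0
    if ran_external_code:
        risk += 0.3
    if ran_external_code and identity.has_prod_access:
        risk += 0.4  # blast radius: production keys and signing access
    if touched_secrets:
        risk += 0.2
    if identity.recent_external_contacts > 0:
        risk += 0.1  # recruiter-style social engineering in play
    return min(risk, 1.0)

if __name__ == "__main__":
    dev = Identity("alice", has_prod_access=True, recent_external_contacts=2)
    events = [
        Event("alice", "clone_repo", "github:external"),
        Event("alice", "exec_process", "github:external"),
        Event("alice", "read_secret", "vault:prod"),
    ]
    print("SIEM rule fired:", any(siem_rule(e) for e in events))
    print("Semantic risk score:", semantic_risk(dev, events))
```

The point of the sketch: the rule answers "did this event match a pattern?", while the semantic check answers "how much business risk does this identity's behavior represent?", which is the question the recruiter attack actually poses.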
This is a fascinating and alarming example of how far social engineering has evolved with AI in the loop. The sophistication of these fake “recruiter” engagements really blurs the line between technical and human threat vectors. I like how Exaforce focuses on semantic, identity-level understanding rather than just rule-based detections. That is exactly where the industry needs to go.