
Well-Done Tofu

Thoughtfully prepared ideas and releases from and by our team.

How Fraudulent Applicants Are Targeting Your Security Team: Hiring in April 2026


Jason Zoltak
12 minute read


The next person you hire onto your security team might be operating three other jobs simultaneously from a laptop farm in Pyongyang. That's not hyperbole. Between state-sponsored IT workers, AI-generated video interviews, and synthetic identities assembled from breached data, security roles have become the most targeted function in applicant fraud. The access you provision on day one is exactly what makes these positions worth the effort to infiltrate.

TLDR:

  • Security teams are infiltrated through hiring because roles grant day-one access to networks and tools.
  • Fraudulent applicants rose 1,300% in 2024, with deepfakes and synthetic identities bypassing standard checks.
  • DPRK IT workers target remote security roles to extract threat intel and embed in company infrastructure.
  • Traditional background checks verify individual data points but miss synthetic identities engineered to pass.
  • Tofu screens every applicant across 40+ signals and catches deepfakes during live interviews in real time.

Why Security Teams Are Prime Targets for Hiring Fraud

Security teams hold the keys. Network access, zero-trust architecture, incident response protocols, vulnerability databases: the roles these teams fill carry more systemic access than almost any other function in a company. That makes them a priority target, not an afterthought.

The irony is hard to ignore. The people charged with stopping infiltration are being infiltrated through the very process they're not watching: hiring. Fraud rings and state-sponsored actors know this. And remote-first hiring has made it far easier to exploit.

Technical security roles are disproportionately remote, high-paying, and high-trust. That combination is exactly what fraudulent applicants optimize for. When the job involves handling sensitive infrastructure, the access granted on day one is already a breach waiting to happen.

The Scale of Applicant Fraud Targeting Technical Roles in 2026

The numbers are no longer theoretical. Companies reported an average loss of over $50,000 per fraudulent hire in 2025, with some cases exceeding $100,000 once you factor in project delays, legal fees, and reputational damage. By 2028, researchers predict one in four applicants will be fake.

For most departments, a fraudulent hire is an expensive HR problem. For a security team, it's a breach vector. The cost calculus is completely different when the person you hired has access to your SIEM, your endpoint controls, or your cloud infrastructure. A $50,000 average loss doesn't account for what a bad actor embedded in a security function can extract or destroy before anyone notices.

State-Sponsored IT Workers Infiltrating Security Positions

The DPRK IT worker problem has a scale most hiring teams still haven't internalized. Upwards of 100,000 North Korean operatives are spread across 40 countries, collectively generating approximately $500 million annually through fraudulent remote work. That revenue funds weapons programs. The FBI and OFAC have both issued public guidance on it. And yet applications keep getting through.

"These are not opportunistic fraudsters. They are organized, state-directed, and targeting roles with access to infrastructure."

Security positions are at the top of that target list. Think about what a DPRK operative gains by landing a remote cybersecurity role: threat intelligence, active detection tooling, internal runbooks, and in some cases direct access to the systems a company uses to defend itself. That's infiltration with a blueprint attached.

The roles these operatives pursue include engineering, DevOps, AI, and cybersecurity functions, because those functions carry privileged access and produce intelligence with real strategic value. A fraudulent hire in accounts payable is costly. A fraudulent hire on your security team is a different category of incident entirely.

What makes detection hard is how good the cover has become. Stolen identities, fabricated employment histories, and coached interview responses make surface-level screening nearly useless. Researchers have documented the depth of these networks, and Nisos has tracked the insider threat patterns that follow a successful placement. Standard checks don't catch any of it.

How Deepfake Technology Bypasses Security Team Interview Processes

Deepfake fraud attempts rose 1,300% in 2024, according to Pindrop's 2025 Voice Intelligence Report. That number is worth sitting with. Not 30%. Not doubled. Thirteen hundred percent.

The techniques have outpaced most teams' ability to visually detect them. Real-time face-swapping overlays mask the operator's actual appearance. Voice synthesis replicates tone and cadence well enough to pass a phone screen. AI-generated responses feed through earpieces or secondary screens, letting the fraudster answer technical questions they couldn't answer themselves.

Security teams are uniquely exposed here. Their interview loops are rigorous, which creates longer video call sessions. More session time means more surface area for AI overlay tools to operate. And because candidates for these roles are expected to be methodical and precise, slow or deliberate responses don't trigger suspicion the way they might elsewhere. The fraud hides inside professional behavior.

Human eyes can't reliably catch this. Detecting it requires analyzing lip sync, eye movement, facial construction, and voice patterns in real time to flag what trained interviewers still miss.

Synthetic Identity Fraud in Security Team Applications

Synthetic identity fraud has no obvious victim — no police report gets filed, no one calls HR to say their identity was stolen. The fraud is silent by design.

A synthetic identity is assembled from real fragments: a legitimate SSN from a data breach, a real employer's name, a valid university, stitched together with fabricated contact details and manufactured digital footprints. The result is a candidate who resolves in every system a background check queries, but has never been a real person.

For security team roles, that silence is especially dangerous. Infrastructure access and clearance decisions move fast, and the window between onboarding and first access is short.

Why Background Checks Fall Short Here

Traditional background checks confirm that individual components resolve. They are not built to analyze whether the full identity coheres. Synthetic identities are engineered piece by piece to pass exactly those checks.

Detection requires cross-referencing data points to find inconsistencies in how an identity was assembled, not whether its parts verify individually. That means analyzing resume metadata, validating social account ownership, and checking whether a digital footprint reflects a real life lived over time. The signal lives in the seams. Finding it requires tools built for that specific problem.
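The resume metadata check is the easiest of these to automate. A .docx file is just a zip archive whose `docProps/core.xml` carries creator and creation-date fields. The sketch below is a minimal illustration, not Tofu's implementation: the function names, the three-day "freshly constructed" window, and the author-mismatch heuristic are all assumptions.

```python
import xml.etree.ElementTree as ET
import zipfile
from datetime import datetime, timedelta
from io import BytesIO

# XML namespaces used inside docProps/core.xml of a .docx archive.
NS = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def core_properties(docx_bytes: bytes) -> dict:
    """Pull the creator and creation date out of a .docx (a zip archive)."""
    with zipfile.ZipFile(BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    return {
        "creator": root.findtext("dc:creator", default="", namespaces=NS),
        "created": root.findtext("dcterms:created", default="", namespaces=NS),
    }

def resume_flags(props: dict, applicant_name: str, applied_on: datetime) -> list:
    """Illustrative heuristics: authorship under a different name, or a
    document created within days of the application (hypothetical threshold)."""
    flags = []
    if props["creator"] and applicant_name.lower() not in props["creator"].lower():
        flags.append("author_mismatch")
    if props["created"]:
        created = datetime.fromisoformat(props["created"].replace("Z", "+00:00"))
        if applied_on - created.replace(tzinfo=None) < timedelta(days=3):
            flags.append("freshly_constructed")
    return flags
```

Neither flag is conclusive on its own; like every signal in this space, metadata anomalies only matter when cross-referenced against the rest of the application.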

Human review alone will not catch it. Neither will a standard background check vendor.

The Laptop Farm Infrastructure Behind Remote Security Hires

Getting hired is only the first step. What happens after the offer letter is where the real threat begins.


Fraudulent operators don't work from a single workstation. They run laptop farms: physical racks of company-issued devices controlled remotely through hardware like PiKVM, which gives an overseas operator full keyboard and display access to a machine sitting at a U.S. mail-forwarding address. One person can hold multiple jobs simultaneously, each on a separate device, each appearing to operate from a legitimate domestic location.

The networking layer compounds the problem. Tools like Tailscale create encrypted mesh networks that route traffic in ways that bypass standard perimeter monitoring. From your security stack's perspective, the device looks local and clean.

The specific danger for security teams is that the access those roles require feeds directly into that infrastructure. Credentials, VPN configs, internal tooling, and detection logic all flow through a machine your team provisioned and trusts. The perimeter defense your security team built assumes the insider is legitimate. Laptop farm operators know that assumption exists, and they build their entire operation around exploiting it.
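One narrow slice of this problem can be automated at screening time: checking whether an applicant's observed IP falls in address space tied to commercial proxies, VPN exits, or known relay infrastructure. The sketch below uses Python's stdlib `ipaddress` module; the hardcoded ranges are documentation-reserved placeholders, since a real deployment would consume maintained threat feeds rather than a static list.

```python
import ipaddress

# Placeholder CIDR blocks for illustration only. These are IETF
# documentation ranges standing in for real proxy/VPN feed data.
SUSPICIOUS_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def ip_is_suspicious(ip: str) -> bool:
    """Return True if the address falls inside any flagged network block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SUSPICIOUS_RANGES)
```

A hit here is one signal among many, not a verdict: legitimate candidates use VPNs too, which is exactly why single-signal review fails and triangulation is required.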

Red Flags Security Teams Miss During Application Review

Most fraudulent applicants aren't missed because the signals are absent. They're missed because no one knows where to look.

Here are the red flags that consistently appear across compromised applications, and that recruiters routinely explain away in isolation:

  • Resume file metadata showing creation dates that don't align with claimed employment timelines, a tell that the document was recently constructed to fit the job.
  • LinkedIn profiles with aged account creation dates but sparse activity history, or follower counts misaligned with stated career tenure.
  • VoIP phone numbers registered to services commonly associated with fraud rings.
  • IP locations that don't match the claimed location, or that resolve to known proxy infrastructure.
  • AI-generated application language with unnaturally consistent tone and phrasing that mirrors job description text verbatim.

Seen individually, each of these gets explained away. A thin LinkedIn seems normal. An IP mismatch gets written off as a VPN. Together, they tell a different story, and human review isn't built to triangulate across 40+ signals per applicant at volume. That's a structural mismatch between the scale of fraud and the tools most security hiring teams have available.
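The triangulation logic itself is simple to express. Below is a minimal weighted-scoring sketch over the five red flags listed above; the weights and the review threshold are invented for illustration and bear no relation to Tofu's actual model.

```python
# Assumed weights for illustration only, not a production fraud model.
SIGNAL_WEIGHTS = {
    "resume_metadata_mismatch": 0.30,
    "thin_linkedin_footprint": 0.15,
    "voip_phone_number": 0.20,
    "ip_location_mismatch": 0.20,
    "ai_generated_language": 0.15,
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every signal that fired for this applicant."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def verdict(signals: dict, threshold: float = 0.35) -> str:
    """Route to manual review once combined signals cross the threshold."""
    return "review" if risk_score(signals) >= threshold else "pass"
```

The point the sketch makes: a lone thin LinkedIn (0.15) passes, but thin LinkedIn plus an IP mismatch (0.35) crosses the line. Individually explainable, jointly damning.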

Why Traditional Background Checks Fail for Security Roles

Standard background checks answer one question: do the pieces resolve? Employment history, education, references. Each gets confirmed in isolation. The problem is that sophisticated fraud is engineered to pass exactly that check.

| Fraud Vector | Traditional Background Check | Modern Fraud Detection (Tofu) |
|---|---|---|
| Synthetic identity (assembled from breached data fragments) | Passes verification because individual components (SSN, employer, education) resolve in isolation; no cross-validation of identity coherence. | Flags the identity by cross-referencing 40+ signals against 4+ billion data points; detects when fragments don't cohere into a real lived history. |
| DPRK IT worker (state-sponsored operative using a stolen identity) | Verifies stolen credentials as legitimate; cannot detect that the identity is being operated by someone else overseas. | Catches location spoofing, VoIP number patterns, IP mismatches, and behavioral signals consistent with laptop farm operations before the interview stage. |
| Deepfake video interview (AI-generated video overlay during a live call) | No detection capability; background checks occur before or after the interview, not during the video interaction itself. | DeepDetect analyzes lip sync, eye movement, facial construction, and voice patterns in real time during interviews to flag AI manipulation as it happens. |
| Resume metadata manipulation (recently fabricated documents backdated to match a timeline) | Does not check document metadata or creation timestamps; only validates content claims against external records. | Analyzes resume file metadata, including creation dates, modification history, and authorship data, to detect recently constructed documents masquerading as historical records. |
| Fraud ring reference network (coordinated references who vouch for fake candidates) | Calls provided references, who confirm employment; cannot detect that references are part of a coordinated fraud operation. | Cross-references applicant data against a proprietary Fraudbase of 5M+ analyzed profiles to identify patterns connecting candidates to known fraud networks. |
| Thin digital footprint (aged accounts with sparse activity inconsistent with claimed tenure) | May verify that a LinkedIn account exists but does not analyze activity patterns, follower alignment, or footprint consistency over time. | Validates social account ownership and analyzes whether the digital footprint reflects a real life lived over time, flagging aged accounts with suspicious activity gaps. |

A stolen identity from a data breach resolves cleanly, a fabricated employer that existed a decade ago verifies without issue, and references drawn from the same fraud ring answer calls and play the part. The background check passes and the candidate advances.

Security roles demand a different standard. The depth of screening a candidate receives should scale with the access provisioned on day one (network controls, detection tooling, internal runbooks), yet most companies apply identical background check processes regardless of what a role actually touches.

The gap is not in execution. It is in architecture.

Where the Model Breaks Down

Traditional verification was built for a world where identity fraud required meaningful effort. Synthetic identities backed by years of built-up credit history and real breach data have changed that calculus entirely. Security teams hiring today are reviewing candidates whose paper trail was constructed to survive scrutiny, not candidates who happened to look good on paper.

Checking whether something is consistent is not the same as checking whether it is real.

How Tofu Protects Security Teams During Hiring

Tofu screens every applicant across 40+ signals the moment they hit your ATS, validating identity against 4+ billion data points and cross-referencing against a proprietary Fraudbase built from 5M+ analyzed profiles. Synthetic identities, DPRK IT workers, location spoofing, resume metadata anomalies: all flagged before a recruiter opens the application.

Interview-Layer Coverage

DeepDetect extends protection through the interview itself. Real-time analysis of lip sync, eye movement, facial construction, and voice patterns catches AI-generated video manipulation as it happens. A proxy swapper who cleared the application screen gets caught when they appear on the video call as a different person.

Direct Pipeline Access

For security teams that want raw signal access, the Fraud API delivers real-time risk scores via a single API call, with device and IP fingerprinting, cross-network consortium intelligence, and detailed fraud payload data feeding directly into your internal tooling.
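For teams wiring this into their own tooling, an integration is a matter of one authenticated POST per applicant. The sketch below is a hypothetical shape only: the endpoint URL, field names, and `risk_score` response key are assumptions for illustration, not Tofu's published API schema.

```python
import json
import urllib.request

FRAUD_API_URL = "https://api.example.com/v1/fraud-check"  # hypothetical endpoint

def build_fraud_check(applicant: dict, api_key: str) -> urllib.request.Request:
    """Assemble a per-applicant risk-score request. Every field name here
    is an illustrative assumption, not a documented schema."""
    body = json.dumps({
        "name": applicant["name"],
        "email": applicant["email"],
        "phone": applicant.get("phone"),
        "ip": applicant.get("ip"),
    }).encode()
    return urllib.request.Request(
        FRAUD_API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending and reading the (assumed) response, not executed here:
# with urllib.request.urlopen(build_fraud_check(applicant, key)) as resp:
#     risk = json.load(resp)["risk_score"]
```

The value of a raw API over a dashboard is composability: scores and fraud payload data can gate ATS stage transitions or feed the same SIEM the security team already watches.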

If your security team is actively hiring for security roles and you want to see what the fraud picture looks like in your current pipeline, we're happy to walk through it.

Final Thoughts on Fraud Targeting Security Roles

Hiring security teams means handing over the keys before you know who's really holding them. The fraud infrastructure targeting these roles is organized, state-funded, and built to pass the checks you're running. Standard screening confirms identity components resolve but doesn't prove the person is real. Your security function can't defend what it doesn't see coming. If you're hiring for roles with privileged access, we can show you what's hiding. Waiting until after onboarding is too late.

FAQs

How can fraudulent applicants specifically target security team hiring processes?
Security roles offer privileged access to infrastructure, detection tools, and sensitive protocols from day one. Fraudulent applicants target these positions because successful placement grants access to the exact systems designed to stop infiltration, making security teams both high-value targets and operationally vulnerable during hiring.
What makes deepfake detection harder during security role interviews?
Security team interview loops are longer and more technical, creating extended video call sessions where AI overlay tools have more time to operate. Methodical, deliberate responses are expected in these roles, so the slow cadence that sometimes signals fraud gets mistaken for professional precision.
Why do standard background checks fail to catch synthetic identities?
Traditional background checks verify that individual components resolve — employment, education, references — but don't analyze whether the full identity coheres. Synthetic identities are engineered piece by piece using real data fragments (legitimate SSNs, valid universities, real employer names) specifically to pass these isolated verification checks.
How do DPRK IT workers maintain multiple security jobs simultaneously?
They operate laptop farms: physical racks of company-issued devices controlled remotely through hardware like PiKVM. One overseas operator manages multiple machines simultaneously, each sitting at a U.S. mail forwarding address, each appearing to work from a legitimate domestic location through encrypted mesh networks that bypass standard monitoring.
What applicant signals should security teams flag during resume review?
Resume metadata showing creation dates misaligned with employment timelines, LinkedIn profiles with aged accounts but sparse activity, VoIP numbers from fraud-associated services, IP addresses that don't match claimed locations, and AI-generated application language that mirrors job descriptions verbatim. In isolation these seem explainable — together they indicate fraud.
What is the estimated financial cost of a fraudulent hire on a security team?
Companies reported an average loss of over $50,000 per fraudulent hire in 2025, with some cases exceeding $100,000 when including project delays, legal fees, and reputational damage. For security teams specifically, these costs don't account for the potential damage from a bad actor with access to SIEM systems, endpoint controls, or cloud infrastructure before detection.
How much revenue do North Korean IT workers generate annually through fraudulent employment?
Upwards of 100,000 North Korean operatives spread across 40 countries collectively generate approximately $500 million annually through fraudulent remote work. This revenue directly funds weapons programs, making it a state-sponsored operation with strategic objectives beyond typical employment fraud.
What types of security roles are most frequently targeted by DPRK IT workers?
DPRK operatives specifically target engineering, DevOps, AI, and cybersecurity functions because these roles carry privileged access and produce intelligence with real strategic value. These positions provide access to threat intelligence, active detection tooling, internal runbooks, and the systems companies use to defend themselves.
What is Tailscale and why does it complicate fraud detection in laptop farm operations?
Tailscale is a tool that creates encrypted mesh networks, allowing traffic to be routed in ways that bypass standard perimeter monitoring. Fraudulent operators use it to make remotely-controlled laptop farm devices appear local and clean to a company's security stack, effectively hiding the overseas nature of the operation.
How much did deepfake fraud attempts increase in 2024?
Deepfake fraud attempts rose 1,300% in 2024 according to Pindrop's 2025 Voice Intelligence Report. This dramatic increase reflects the widespread adoption of real-time face-swapping overlays, voice synthesis, and AI-generated responses that allow fraudsters to pass technical interviews.
What is a mail forwarding address and how is it used in security team hiring fraud?
Mail forwarding addresses are U.S.-based locations where company-issued laptops are physically received and set up, while the actual operator controls them remotely from overseas. This setup allows fraudulent workers to appear domestically located while maintaining multiple simultaneous jobs from laptop farms in other countries.
What percentage of applicants are predicted to be fake by 2028?
Researchers predict that by 2028, one in four applicants will be fake. This projection reflects the rapidly advancing sophistication of synthetic identities, deepfake technology, and state-sponsored fraud operations specifically targeting remote technical roles.
Why is synthetic identity fraud considered a 'silent' crime?
Synthetic identity fraud has no obvious victim because it's assembled from real fragments like legitimate SSNs, real employer names, and valid universities combined with fabricated details. No police report gets filed and no one calls HR to report identity theft, allowing the fraud to operate undetected through standard verification processes.
What is PiKVM and how is it used in fraudulent remote security hiring?
PiKVM is hardware that gives an overseas operator full keyboard and display access to a company-issued machine sitting at a U.S. address. Fraudulent workers use it to remotely control laptop farm devices, allowing one person to maintain multiple security jobs simultaneously while appearing to work from legitimate domestic locations.
How does Tofu's DeepDetect technology work during video interviews?
DeepDetect performs real-time analysis of lip sync, eye movement, facial construction, and voice patterns during video interviews to catch AI-generated video manipulation as it happens. This catches proxy swappers and deepfake operators who may have cleared the initial application screening but appear as different people or use AI overlays during live interviews.
