
Well-Done Tofu

Thoughtfully prepared ideas and releases from and by our team.

Fraud Score: Complete Guide for Businesses (April 2026)


You can pull a fraud score check online for any candidate in seconds. Drop in an IP, get a risk rating, see if they're routing through a proxy or a known bad network. That catches some fraud, but it misses the stuff that actually costs you. A generic IP fraud score won't flag resume metadata tied to a known fraud ring, won't cross-reference employment history against billions of records, and won't tell you if the person who applied is the same person showing up to the interview. Fraud scoring was built for payments. Recruiting fraud requires a completely different set of signals, and most companies are still using the wrong toolkit.

TLDR:

  • Fraud scores rate risk from 0-100 by analyzing 40+ signals like IP reputation and device fingerprinting.
  • Generic fraud tools miss recruiting-specific threats like resume metadata and social account ownership.
  • Remote hiring fraud has grown an estimated 12% annually since 2020, targeting roles with inside access to sensitive data.
  • Tofu detects applicant fraud at application and interview deepfakes in real time across your hiring funnel.

What Is a Fraud Score?

A fraud score is a number. It's a risk score, typically ranging from 0 to 100, that reflects how likely a given user, transaction, or interaction is to be fraudulent. The higher the score, the higher the risk.

Behind that single number is a lot of machinery. Fraud scoring systems pull from dozens of signals simultaneously: IP reputation, device fingerprinting, email age, behavioral patterns, geolocation consistency, and more. Each signal gets weighted, combined, and run through a model that spits out one actionable output. That's the score.

The design logic is deliberate. Decision makers don't want raw data dumps. They want a clear answer fast. A fraud score turns what could be hundreds of data inputs into an actionable verdict: block a transaction, flag a user, or route something to manual review.

Fraud scores matter because they're scalable. A human analyst can review dozens of cases. A fraud scoring model can review millions in the same timeframe, with consistent logic applied every single time.

The catch is that scores are only as good as the signals feeding them. A model trained on one type of fraud won't catch another. That gap matters more than most businesses realize, and it's worth keeping in mind as fraud scoring gets applied to increasingly varied contexts.

How Fraud Scores Work

Data collection begins the moment a user, applicant, or transaction enters a system. IP data, device identifiers, browser configurations, behavioral timing, and session metadata are all pulled simultaneously, then fed into a scoring model trained on historical fraud patterns. Each signal gets weighted by its predictive value, then combined into a composite risk rating.

Signals interact. A residential IP paired with a spoofed device fingerprint reads very differently than either signal alone. Good fraud scoring accounts for those combinations, and the individual inputs that feed them.

Most frameworks follow a three-stage architecture:

  • Data collection: Passive capture of IP, device, behavior, and identity signals at the point of interaction
  • Signal analysis: Each input is scored against known fraud patterns and baseline expected behavior
  • Composite scoring: Weighted signals are aggregated into a final risk score, often with a confidence threshold that routes cases to automated action or manual review
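As a rough illustration of that three-stage flow, here's a minimal sketch in Python. The signal names, weights, and thresholds are invented for the example, not any vendor's actual model:

```python
# Illustrative only: signal names, weights, and thresholds are assumptions.

def score_signals(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-signal risk values (each 0.0-1.0) into a 0-100 composite.

    Each signal is weighted by its assumed predictive value; the weighted
    average is then rescaled to the familiar 0-100 range.
    """
    total_weight = sum(weights.values())
    weighted = sum(signals[name] * w for name, w in weights.items())
    return round(100 * weighted / total_weight, 1)

def route(score: float, block_at: float = 80, review_at: float = 50) -> str:
    """Confidence thresholds route cases to automated action or manual review."""
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "manual_review"
    return "allow"

weights = {"ip_reputation": 0.35, "device_fingerprint": 0.30,
           "email_age": 0.15, "behavioral_timing": 0.20}

# A risky IP and a suspicious device, but a well-aged email address:
signals = {"ip_reputation": 0.9, "device_fingerprint": 0.8,
           "email_age": 0.1, "behavioral_timing": 0.6}

score = score_signals(signals, weights)   # 69.0 -> routed to manual review
```

In practice the aggregation step is a trained model rather than a fixed weighted average, but the routing logic (block, review, allow) works the same way.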

The output looks simple. Getting there isn't.

IP Fraud Scores and Location Detection

Your IP tells a story before you say a word. Where you're connecting from, what network you're on, whether you're routing through a proxy or VPN. All of it surfaces the moment a system checks your IP against known threat databases.

An IP fraud score distills that signal into a 0–100 risk rating. Tools like IPQualityScore's IP fraud checker flag proxy connections, Tor exit nodes, and datacenter IPs that rarely belong to legitimate users. High scores don't always mean fraud, but they narrow the field fast.

For remote hiring, this matters acutely. A candidate claiming to be in Austin who's routing through a datacenter in Southeast Asia is worth a second look. IP analysis alone catches the location mismatch; paired with device signals and behavioral data, location anomalies become one of the clearest early indicators that something is off.
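A toy version of this kind of IP check might look like the following. The flags and point values are invented for illustration; real services like IPQualityScore draw on far richer threat data:

```python
# Rule-of-thumb sketch only; flag names and point values are invented.

def ip_risk(is_datacenter: bool, is_proxy_or_tor: bool,
            claimed_country: str, geoip_country: str) -> int:
    """Score an IP 0-100 from a few coarse signals."""
    score = 0
    if is_datacenter:
        score += 40          # datacenter IPs rarely belong to real applicants
    if is_proxy_or_tor:
        score += 35          # anonymizing infrastructure hides true location
    if claimed_country != geoip_country:
        score += 25          # claimed location disagrees with GeoIP lookup
    return min(score, 100)

# Candidate claims to be in the US but connects from a Singapore datacenter:
risk = ip_risk(is_datacenter=True, is_proxy_or_tor=False,
               claimed_country="US", geoip_country="SG")   # 65
```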

Device Fingerprinting and Hardware Analysis

[Illustration: browser attributes, hardware components, and network signals converging into a unique device fingerprint]

IP checks tell you where someone is connecting from. Device fingerprinting tells you what they're connecting with, and whether that device has shown up before wearing a different mask.

Every device carries a configuration signature: browser version, operating system, screen resolution, installed fonts, timezone settings, hardware concurrency, and dozens of other attributes that combine into something close to a unique identifier. No single attribute is unique on its own. The combination almost always is.

Fingerprinting's real value is persistence. Unlike cookies, a device fingerprint survives browser resets, cleared cache, and incognito mode. A suspicious IP paired with a fingerprint that's appeared across dozens of flagged accounts is what separates a legitimate user on a VPN from an actual bad actor. Device analysis adds the second dimension that IP checks alone cannot provide.
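The configuration-signature idea can be sketched by hashing a canonical form of the collected attributes. This is a simplified illustration with a handful of invented attributes; production fingerprinting uses dozens more, plus techniques like canvas and audio entropy:

```python
import hashlib
import json

# Simplified sketch; the attribute set here is a small invented subset.

def fingerprint(attrs: dict) -> str:
    """Hash a sorted, canonical JSON form of the device attributes.

    No single attribute is unique; the combination usually is. Because the
    identifier derives from device configuration rather than stored state,
    it survives cleared cookies and incognito mode until the config changes.
    """
    canonical = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (Macintosh)",
    "screen": "2560x1440",
    "timezone": "America/Chicago",
    "fonts": ["Arial", "Helvetica", "Menlo"],
    "hardware_concurrency": 8,
}

fp = fingerprint(device)   # stable across sessions for the same config
```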

For legitimate users, this process is invisible. No friction, no forms. Verification happens passively in the background, which is exactly how good fraud detection should work.

Common Fraud Score Use Cases Across Industries

Fraud scoring spans every industry that handles transactions, identities, or access decisions. The specific signals vary, but the underlying problem is the same: bad actors exploit systems at scale, and manual review can't keep up.

The numbers reflect how bad it's gotten. Ecommerce fraud hit $48 billion in 2025, chargebacks are up 41%, and the average merchant now absorbs $4.61 in losses for every dollar of actual fraud.

Ecommerce and Payments

Transaction monitoring is fraud scoring's home turf. Every order triggers a real-time check: Is this IP tied to past fraud? Does the billing info match the device location? Is purchase velocity unusual? A score above threshold gets blocked before the order ships.
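The purchase-velocity check can be sketched as follows; the 3-orders-per-hour limit is an invented example, not a real merchant rule:

```python
from datetime import datetime, timedelta

# Toy velocity check; the window and limit are illustrative assumptions.

def unusual_velocity(order_times: list[datetime],
                     window: timedelta = timedelta(hours=1),
                     max_orders: int = 3) -> bool:
    """True if more than max_orders fall inside any trailing window."""
    times = sorted(order_times)
    for i, start in enumerate(times):
        in_window = [t for t in times[i:] if t - start <= window]
        if len(in_window) > max_orders:
            return True
    return False

base = datetime(2026, 4, 1, 12, 0)
burst = [base + timedelta(minutes=m) for m in (0, 5, 10, 15)]   # 4 in 15 min
spread = [base + timedelta(hours=h) for h in (0, 2, 5, 9)]      # 4 over 9 hours
```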

Financial Services

Banks and fintechs apply fraud scores to account opening, login attempts, and wire transfers, with synthetic identity fraud as the primary target.

Ad Tech

Click fraud and invalid traffic cost advertisers billions annually. Fraud scores filter out bot traffic and datacenter IPs before spend gets wasted.

Hiring and Recruiting

Remote hiring created a new attack surface: fraudulent applicants, synthetic identities, and location spoofing. Standard fraud scoring tools weren't built to catch any of it.

Challenges and Limitations of Fraud Scoring

No fraud scoring system gets every call right. The two failure modes pull in opposite directions: false positives block legitimate users, and false negatives let actual fraud through. Tighten your threshold to catch more fraud and you'll frustrate real applicants. Loosen it and attackers slip through.
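The threshold tradeoff is easy to see with a toy example. The scores below are fabricated for illustration:

```python
# Made-up scores for six known-fraudulent and six legitimate applicants.
fraud_scores = [62, 71, 78, 85, 90, 95]   # actual fraud cases
legit_scores = [5, 12, 30, 45, 55, 68]    # legitimate applicants

def tradeoff(threshold: int) -> tuple[int, int]:
    """Return (false_negatives, false_positives) at a given block threshold."""
    false_negatives = sum(s < threshold for s in fraud_scores)   # fraud let through
    false_positives = sum(s >= threshold for s in legit_scores)  # real users blocked
    return false_negatives, false_positives

# Loosening the threshold lets fraud through; tightening it blocks real users:
strict = tradeoff(50)    # no missed fraud, but two legitimate users blocked
loose = tradeoff(80)     # nobody legitimate blocked, but three frauds slip by
```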

Data quality makes this worse. Outdated IP reputation lists and thin device profiles create blind spots that sophisticated fraud rings actively probe and exploit. They rotate infrastructure to stay under detection thresholds.

The deeper issue is context mismatch. A model trained on payment fraud carries the wrong assumptions into a recruiting context, producing wrong signals and wrong calls.

Free Fraud Score Tools vs Enterprise Solutions

Free tools like Scamalytics and IPQualityScore are genuinely useful starting points. Drop in an IP, get a risk score, and know within seconds whether you're dealing with a datacenter, a known proxy, or a Tor exit node.

The ceiling, though, is low. Free tiers check one signal at a time, return limited context, and rely on static threat databases that sophisticated fraud rings have already learned to evade. You get a score. You rarely get a reason.

Enterprise solutions close those gaps in three ways:

  • Real-time scoring across dozens of simultaneous signals beyond basic IP lookups
  • Cross-network intelligence that links bad actors flagged across separate customers
  • Contextual models trained on specific fraud types like payment fraud, account takeover, or recruiting fraud instead of one-size-fits-all logic

The decision to upgrade is usually triggered by a failure. A fraud ring slips through a free tool's threshold because they rotated IP ranges. An account gets compromised using a device that checked out fine in isolation. At that point, the cost of a false negative outweighs the cost of an enterprise subscription by a wide margin.

Capability by capability, here's how generic free tools (Scamalytics, IPQualityScore) compare with an enterprise recruiting solution (Tofu):

IP Reputation Analysis
  • Free tools: Static database lookups for datacenter IPs, proxies, and Tor exit nodes, with limited update frequency
  • Tofu: Real-time IP analysis combined with behavioral context and cross-network intelligence from recruiting-specific threat feeds

Device Fingerprinting
  • Free tools: Basic device identification with no persistence tracking across sessions or historical analysis
  • Tofu: Advanced fingerprinting that tracks device reuse across multiple applications, with consortium data linking patterns across companies

Identity Verification Signals
  • Free tools: Email age and domain reputation checks only; no social account or employment history validation
  • Tofu: Cross-references against 4+ billion data points, including resume metadata, social account ownership, employment history, and a proprietary Fraudbase of 5M+ analyzed profiles

Recruiting-Specific Detection
  • Free tools: No coverage for synthetic identities, resume fraud, proxy interviewers, or location spoofing in a hiring context
  • Tofu: Purpose-built models detecting DPRK IT workers, synthetic identities, proxy interviewer swapping, and resume metadata tied to fraud rings

Real-Time Video Analysis
  • Free tools: Not supported; no deepfake or interview manipulation detection
  • Tofu: Live monitoring during video interviews for AI-generated manipulation, identity consistency, and proxy swapping across Zoom, Teams, and Google Meet

Integration and Workflow
  • Free tools: Manual, one-off lookups with results viewed outside existing systems
  • Tofu: Native ATS integration across 90+ platforms with automatic flagging at application submission and no recruiter workflow disruption

The Growing Fraud Crisis in Remote Hiring

Remote work didn't just change where people work. It changed who can pretend to work there.

Hiring fraud has intensified by an estimated 12% annually since 2020, and recruiting remains one of the least protected attack surfaces in any organization. Standard background checks verify criminal history. They don't catch synthetic identities, location spoofing, or a candidate who hired a stand-in for their technical interview.

The threat types traditional screening misses entirely:

  • Synthetic identities built from real credentials and fabricated contact details
  • Location spoofing via VPN or proxy to hide a sanctioned country of origin
  • Proxy interviewer swapping, where someone else completes your interview rounds
  • DPRK IT worker infiltration targeting remote engineering roles

The security consequences are worse. A bad hire with inside access is an insider threat. In fintech, crypto, or any company handling sensitive customer data, that's not an HR problem. It's a security incident waiting to happen.

Generic tools weren't built for this. An IP check flags a VPN. It doesn't tell you whether the resume metadata traces back to a fraud ring, or whether the LinkedIn profile was created last month with stolen photos. That's the gap recruiting fraud exploits.

Why Generic Fraud Tools Fail for Recruiting

Payment fraud leaves a transaction trail. Recruiting fraud leaves a resume.

Generic tools check IPs and email age. Those signals matter, but they're table stakes. What they miss is everything specific to the hiring context: resume file metadata that ties a candidate to a known fraud ring, LinkedIn profiles created last month with borrowed photos, employment histories that look real until you cross-reference them against billions of data points.

An IP check will tell you a candidate is routing through a proxy. It won't tell you whether their GitHub belongs to them, whether their resume was generated in bulk, or whether that same device fingerprint applied to fourteen other open roles this week.

That's the gap. Recruiting fraud requires triangulating identity across signals that don't exist in payment flows: social account ownership, document metadata, behavioral consistency across application touchpoints, and consortium data from other companies seeing the same bad actors. Purpose-built detection means training models on hiring data, not repurposing fintech logic and hoping it transfers.
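Detecting one of those patterns, a single device fingerprint behind many applications, can be sketched with a simple consortium-style lookup. The data and threshold here are hypothetical:

```python
# Hypothetical sketch: flag fingerprints tied to implausibly many candidates.

applications = [
    {"candidate": "a1", "device_fp": "fp-9021"},
    {"candidate": "a2", "device_fp": "fp-9021"},
    {"candidate": "a3", "device_fp": "fp-9021"},
    {"candidate": "b1", "device_fp": "fp-1337"},
]

def reused_devices(apps: list[dict], max_candidates: int = 2) -> set[str]:
    """Return fingerprints used by more distinct candidates than allowed."""
    seen: dict[str, set[str]] = {}
    for app in apps:
        seen.setdefault(app["device_fp"], set()).add(app["candidate"])
    return {fp for fp, cands in seen.items() if len(cands) > max_candidates}

flagged = reused_devices(applications)   # fp-9021 appears behind 3 candidates
```

The same aggregation works on any shared identifier: email domain, resume file hash, or IP range, which is why consortium data across companies is so valuable.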

Fraud Score Detection for Applicants and Candidates


Recruiting fraud runs through every stage of the hiring funnel, which is why detection needs to run there too.

Tofu's FraudDetect screens every applicant across 40+ signals the moment they hit your ATS, validating identity against 4+ billion data points and a proprietary Fraudbase built from 5M+ analyzed profiles. Synthetic identities, DPRK IT workers, location spoofing, and proxy interviewers all get flagged before a recruiter opens a single resume.

DeepDetect extends that coverage into interviews. It monitors live video calls for AI-generated manipulation, tracks identity consistency across interview stages, and catches proxy swapping in real time across Zoom, Teams, and Google Meet.

The person who applies should be the person who interviews and the person you hire. If you're seeing suspicious patterns in your applicant flow, we're happy to share what we're learning.

Final Thoughts on IP and Device Fraud Scoring

Running a fraud score check online catches proxies but misses everything that happens after the connection. Resume metadata, social graph inconsistencies, interview swap patterns: those require models trained on recruiting fraud, not repurposed payment logic. We're analyzing millions of applicants and sharing what we're learning with teams who want to get ahead of this.

FAQs

What's the difference between a fraud score for payments and a fraud score for recruiting?
Payment fraud scores detect transaction risk using card data, purchase velocity, and chargeback history. Recruiting fraud scores need to validate identity across social accounts, resume metadata, employment history, and device behavior — signals that don't exist in payment flows and require models trained on hiring data, not fintech logic.
How accurate are free IP fraud score checkers?
Free tools like Scamalytics and IPQualityScore check one signal at a time and rely on static threat databases that sophisticated fraud rings actively evade. They'll flag datacenter IPs and known proxies, but they miss context like whether a resume traces back to a fraud ring or whether social accounts actually belong to the applicant.
Can a fraud score detect deepfakes during video interviews?
Generic fraud scores can't. They weren't built for video analysis. Real-time deepfake detection requires monitoring lip sync accuracy, eye movement patterns, facial construction consistency, and voice analysis across interview stages — which is why recruiting-specific tools like Tofu's DeepDetect exist separately from standard fraud scoring systems.
Why do legitimate candidates sometimes get flagged by fraud scoring tools?
VPN usage, new email accounts, and residential proxies all trigger generic fraud models trained on payment fraud. Engineers applying to crypto or fintech companies often use VPNs, and flagging that as suspicious creates false positives that recruiting-specific models trained on hiring contexts know to filter out.
How long does it take to implement fraud scoring for applicant screening?
Most teams integrate fraud detection into their ATS in under two hours with tools that support 90+ platforms. Real-time screening starts immediately at application submission, with no recruiter workflow changes required — flagged candidates surface automatically before anyone opens a resume.
What signals does a fraud score typically analyze?
Fraud scoring systems analyze 40+ signals simultaneously including IP reputation, device fingerprinting, email age, behavioral patterns, geolocation consistency, browser configurations, hardware attributes, and session metadata. Each signal gets weighted by its predictive value and combined into a composite risk rating from 0 to 100.
How does device fingerprinting work?
Device fingerprinting creates a unique identifier by combining browser version, operating system, screen resolution, installed fonts, timezone settings, hardware concurrency, and dozens of other configuration attributes. Unlike cookies, this fingerprint survives browser resets, cleared cache, and incognito mode, making it useful for tracking suspicious devices across multiple flagged accounts.
What are the main types of recruiting fraud that standard fraud scores miss?
Standard fraud scores miss synthetic identities built from real credentials, location spoofing via VPN to hide sanctioned countries of origin, proxy interviewer swapping where someone else completes your interview rounds, and resume metadata tied to known fraud rings. These recruiting-specific threats require different signals than payment fraud detection.
How much has remote hiring fraud increased since 2020?
Hiring fraud has intensified by an estimated 12% annually since 2020, making recruiting one of the least protected attack surfaces in organizations. This growth coincided with the shift to remote work, which expanded who can pretend to work at a company.
What's the difference between false positives and false negatives in fraud scoring?
False positives block legitimate users by flagging them as fraudulent, while false negatives let actual fraud slip through undetected. Tightening detection thresholds catches more fraud but frustrates real applicants, while loosening them allows attackers to exploit the system.
Can fraud scoring systems work passively without adding friction for users?
Yes, for legitimate users the fraud detection process is invisible with no friction or additional forms required. Verification happens passively in the background by collecting IP address data, device identifiers, browser configurations, and behavioral timing the moment a user enters the system.
What are the main limitations of free fraud score tools?
Free tools check one signal at a time, return limited context, and rely on static threat databases that sophisticated fraud rings have learned to evade. They lack real-time scoring across multiple simultaneous signals, cross-network intelligence, and contextual models trained on specific fraud types beyond basic IP lookups.
How much did global ecommerce fraud losses reach in 2025?
Global ecommerce fraud losses hit $48 billion in 2025, with chargebacks up 41% and the average merchant absorbing $4.61 in losses for every dollar of actual fraud. These numbers reflect the severity of fraud across industries that handle transactions and access decisions.
What is DPRK IT worker infiltration?
DPRK IT worker infiltration involves North Korean operatives targeting remote engineering roles using synthetic identities and location spoofing to gain inside access to companies. This represents a security threat rather than just an HR problem, especially for fintech, crypto, and companies handling sensitive customer data.
What kind of data does Tofu's FraudDetect analyze?
Tofu's FraudDetect screens applicants across 40+ signals including IP address, device fingerprinting, social account ownership, resume metadata, and employment history, validating identity against 4+ billion data points and a proprietary Fraudbase built from 5 million+ analyzed profiles to flag synthetic identities, location spoofing, and known fraud patterns.

« Back to Blog