What we build
A fraud stack that sees what single-claim scoring misses.
Every capability is a production layer — documents, images, graph, precedent — working together on the same claim in the same second, so nothing escapes because two signals were siloed.
Document-pattern anomaly detection
Invoice tampering, date inconsistencies, provider-signature forgery, and copy-paste narrative patterns flagged across every FNOL, police report, and invoice before the claim advances.
Image-tampering forensics
EXIF metadata cross-checked against device + time + location. Pixel-level computer vision catches splicing, re-saves, and synthetic damage in every scene photo.
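The EXIF cross-check can be sketched in a few lines. This is a minimal illustration, not the production pipeline: the field names, the 24-hour drift window, and the editor check are all assumptions made for the example.

```python
from datetime import datetime, timedelta

def exif_consistency_flags(exif: dict, claim: dict, max_drift_hours: int = 24) -> list[str]:
    """Compare a photo's EXIF fields against the claim narrative.

    `exif` and `claim` are plain dicts here; in production they would come
    from the image pipeline and the FNOL record respectively.
    """
    flags = []
    # Timestamp: the photo should have been taken near the reported loss date.
    shot = datetime.fromisoformat(exif["DateTimeOriginal"])
    loss = datetime.fromisoformat(claim["loss_date"])
    if abs(shot - loss) > timedelta(hours=max_drift_hours):
        flags.append("timestamp_drift")
    # Device: a re-submitted photo often carries a different camera model.
    if claim.get("declared_device") and exif.get("Model") != claim["declared_device"]:
        flags.append("device_mismatch")
    # Software tag: an editing tool in EXIF hints at post-capture manipulation.
    if "Photoshop" in exif.get("Software", ""):
        flags.append("edit_software_present")
    return flags
```

Pixel-level checks (splicing, noise residuals) sit in a separate CV pipeline; the metadata pass above is the cheap first gate.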
Claimant + repairer network graphs
Entities resolved across claims history into a live graph. Centrality, clustering, and motif matching surface collusion rings invisible to per-claim scoring.
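The centrality and clustering signals can be sketched with networkx on a toy claims graph. Node names and the 0.5 hub threshold are purely illustrative; the production graph carries far richer edge types.

```python
import networkx as nx

# Toy claims graph: claimants and repairers as nodes, one edge per claim.
G = nx.Graph()
claims = [
    ("claimant_A", "repairer_X"), ("claimant_B", "repairer_X"),
    ("claimant_C", "repairer_X"), ("claimant_D", "repairer_Y"),
]
G.add_edges_from(claims)

# Centrality spike: a repairer touching many otherwise-unrelated claimants.
centrality = nx.degree_centrality(G)
hubs = [n for n, c in centrality.items() if c > 0.5]

# Clustering: connected components group linked parties into one case,
# which is exactly what per-claim scoring cannot see.
rings = [comp for comp in nx.connected_components(G) if len(comp) >= 3]
```

Here `repairer_X` surfaces as a hub and its four-party component becomes a single investigation candidate, while the isolated `claimant_D`/`repairer_Y` pair does not.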
Similarity search over 4M+ claims
Vector similarity finds every historical claim that looks like today's — same narrative, same repairer, same staged pattern — so precedent becomes evidence instantly.
Soft-fraud vs hard-fraud scoring
Exaggeration, padding, and opportunism scored separately from organised ring fraud — so treatment (adjust) and escalation (block + SAR) match the risk class.
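A hedged sketch of that routing logic, with hypothetical scores and thresholds standing in for the per-line-of-business tuning described above:

```python
def route_claim(soft_score: float, hard_score: float,
                soft_threshold: float = 0.7, hard_threshold: float = 0.8) -> str:
    """Map two independently scored risk classes to a treatment.

    Thresholds here are illustrative; in practice they are tuned per line
    of business against SIU capacity and false-positive tolerance.
    """
    if hard_score >= hard_threshold:
        return "block_and_sar"   # organised / intentional: escalate
    if soft_score >= soft_threshold:
        return "adjust"          # opportunistic padding: negotiate down
    return "pay"                 # clean-through
```

Keeping the two scores separate is the point: a high soft score with a low hard score never triggers the heavyweight SAR path.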
SIU case management + SAR filing
Every flagged claim arrives in the special-investigation queue with evidence pack, draft narrative, and SAR-ready exports — so investigators spend time on decisions, not assembly.
Where the stack catches fraud
Every line of business. Every fraud class.
Same document, image, and graph layers — tuned per line of business. Shared entity resolution, per-LoB models, cross-line ring detection. The patterns below all run on the same stack; only the models, thresholds, and integrations change.
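Shared entity resolution is the connective tissue here. A simplified sketch, assuming name and phone as the only identity signals (a production resolver adds addresses, documents, and device fingerprints):

```python
import re
from collections import defaultdict

def entity_key(name: str, phone: str) -> tuple[str, str]:
    """Normalise name + phone into a blocking key."""
    norm_name = re.sub(r"[^a-z]", "", name.lower())
    norm_phone = re.sub(r"\D", "", phone)[-8:]  # keep last 8 digits
    return norm_name, norm_phone

def resolve(records: list[dict]) -> dict[tuple, list[str]]:
    """Group claim records across lines of business by shared identity key."""
    groups = defaultdict(list)
    for r in records:
        groups[entity_key(r["name"], r["phone"])].append(r["claim_id"])
    return {k: v for k, v in groups.items() if len(v) > 1}
```

The same normalised key links a motor claim and a health claim filed under cosmetic variations of one identity, which is what makes cross-line ring detection possible at all.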
Auto · staged accidents
Motor fraud rings, collusive repairers, phantom passengers, and cash-for-crash schemes — caught by graph centrality and similarity to known historic rings.
Health · invoice inflation
Overlapping treatment dates, phantom visits, upcoded procedures, and provider-watchlist hits — surfaced by doc-pattern models and provider anomaly scoring.
Property · staged loss
Claim amounts inconsistent with weather events, pre-loss imagery mismatches, and suspicious contractor networks flagged before the inspector is dispatched.
Specialty · commercial lines
Cargo, marine, and commercial-property claims scored against historical ring precedent and counterparty network — higher-value, higher-asymmetry decisions.
Organised ring detection
Cross-policy, cross-LoB entity resolution — the same ring operating across motor, health, and property surfaces as a single network, not three isolated cases.
Soft-fraud · exaggeration
Opportunistic padding of legitimate claims — detected, sized, and adjusted without blowing up the customer relationship or dragging the SIU in unnecessarily.
Model families we deploy
No single model catches every fraud. So we ensemble.
Each model family targets a distinct fraud surface — documents, images, networks, precedent. Blending their outputs is what separates catching a staged ring from catching a single inflated invoice.
Gradient-boosted + transformer ensemble over invoice structure, narrative style, date logic, and provider metadata. Tuned to the tampering signatures the team has seen before.
Dual-pipeline — EXIF + device-attribution check alongside error-level-analysis and noise-residual CV. Catches re-saves, splicing, and AI-generated imagery.
Graph-neural-network over the claimant-repairer-provider network — surfaces ring-like motifs, shared-address clusters, and centrality spikes before the 4th claim lands.
Dense embeddings over narrative + metadata + graph neighbourhood. FAISS retrieves the closest historical claims in milliseconds — precedent becomes part of the decision.
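The retrieval step can be illustrated with brute-force cosine similarity in NumPy. FAISS computes the same ranking; it just makes it fast enough for 4M+ vectors. Shapes and data below are made up for the sketch.

```python
import numpy as np

def top_k_similar(query: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most cosine-similar historical claims."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity against every row
    return np.argsort(-sims)[:k]      # highest similarity first
```

Usage: embed today's claim, retrieve its nearest historical neighbours, and hand those precedents to the scorer and the evidence pack.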
Data sources wired into every model
Every signal that moves the verdict — integrated.
Pulled in parallel, normalised into a single case schema, versioned alongside the models that consume them.
Explainability, not just flags
A flag alone doesn't pass SIU review. An evidence pack does.
Every block, investigate, or clean-through is accompanied by top-feature reasoning, document + image + graph evidence artefacts, model-version provenance, and a draft SAR narrative — generated at decision time, indexed for audit.
- Top-feature contributions per flag (document + image + graph)
- Full evidence pack with artefacts and historical matches
- Draft SAR narrative and regulator-template export
- Aligned to MAS, NAIC, FATF, Coalition AIF
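One way to picture the evidence pack as a data structure. The field names and the narrative format are hypothetical, chosen for the sketch rather than taken from our actual export schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvidencePack:
    """Decision-time artefact bundle attached to every verdict."""
    claim_id: str
    verdict: str                           # block / investigate / clean-through
    top_features: list[tuple[str, float]]  # (feature name, contribution)
    artefacts: list[str]                   # paths to doc/image/graph evidence
    model_version: str                     # provenance for audit
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def draft_sar_narrative(self) -> str:
        drivers = ", ".join(f"{name} ({w:+.2f})" for name, w in self.top_features)
        return (f"Claim {self.claim_id} flagged '{self.verdict}' by model "
                f"{self.model_version}. Primary drivers: {drivers}.")
```

Generating the pack at decision time, with the model version pinned inside it, is what makes the flag auditable months later.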

Why Axccelerate for claims fraud
Not a rules engine.
A detection stack.
A rules engine fires on what you already knew. Our stack surfaces patterns hidden across documents, images, networks, and precedent — the fraud your SIU team hasn't seen yet.
Pricing
Priced to the LoB + volume, not per flag.
Fraud deployments are custom — we scope against your lines of business, claim volume, and SIU tooling before quoting.
Glossary
The vocabulary behind every fraud verdict.
A quick reference for the terms that show up in claims-fraud detection — the language your SIU, compliance, and regulator all work in.
- Soft fraud
- Opportunistic exaggeration
Padding of an otherwise legitimate claim — inflated invoice, added damages, stretched narrative. Typically handled by adjustment rather than outright denial.
- Hard fraud
- Organised / intentional
A deliberately fabricated or staged claim — often coordinated across multiple claimants and providers. Targeted for blocking and regulatory reporting, not negotiation.
- Staged accident
- Engineered motor-fraud event
An incident orchestrated to generate a claim — often involving scripted collisions, phantom passengers, and collusive repairers feeding inflated invoices.
- Bust-out
- Premium-then-claim pattern
Policyholder pays premium briefly, then stages a large claim early in the cycle — a pattern detectable through tenure + claim-size + graph features.
- Inflated claim
- Artificially increased payout
Legitimate incident with damage or medical billings padded beyond actual loss. The canonical soft-fraud signature that document-pattern models target.
- Collusion network
- Linked-party conspiracy
Claimants, repairers, clinics, and sometimes adjusters acting together. Network-graph analytics are designed to see these as clusters, not isolated cases.
- SIU
- Special Investigation Unit
The carrier's internal fraud-investigation team. SIUs consume the flagged-claim queue, run interviews, and decide on block, adjust, or SAR filing.
- SAR
- Suspicious Activity Report
A regulatory filing raised when fraud or money-laundering is suspected. Required in many jurisdictions under FATF-aligned AML rules, and central to carrier compliance.
- EXIF metadata
- Exchangeable image file data
The metadata every digital photo carries — device, timestamp, geo. Cross-checking EXIF against the claim narrative is a first-line image-authenticity test.
- Image tampering
- Pixel-level manipulation
Any edit to a photo after capture — splicing, cloning, re-save, or AI generation. Detected via pixel-noise residual, error-level analysis, and device attribution.
- Pixel forensics
- Computer vision for authenticity
Techniques that surface manipulation invisible to the human eye — error-level analysis, noise residuals, and CFA-pattern checks on every claim image.
- Champion-challenger
- Parallel model operation
Running a new model (challenger) alongside the live model (champion) on real traffic to validate lift before promotion. Standard in fraud-model deployment.
- False-positive rate
- Incorrectly flagged legitimates / total legitimates
The share of legitimate claims that get flagged. (The related ratio often quoted informally, flags that prove legitimate over total flags, is strictly the false-discovery rate.) Core trade-off: pushing FPR down means less customer friction but more missed fraud.
- Chargeback cycle
- Post-payout dispute window
The period after a claim is paid during which the carrier can still recoup funds on discovered fraud. Shrinking this cycle is a key ROI driver for pre-payout detection.
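For the false-positive-rate entry above, the distinction between the strict rate and the flags-that-prove-legitimate ratio is easiest to see in code. The counts are made up for the example:

```python
def flag_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Confusion-matrix rates for a fraud-flag model.

    fpr:    legitimate claims incorrectly flagged / all legitimate claims
    fdr:    flags that prove legitimate / all flags (the ratio SIU teams
            often informally call the "false-positive rate")
    recall: fraud caught / all fraud
    """
    return {
        "fpr": fp / (fp + tn),
        "fdr": fp / (fp + tp),
        "recall": tp / (tp + fn),
    }
```

With 100 flags of which 20 prove legitimate, the FDR is 20%, while the FPR depends on how many legitimate claims there were in total: the two numbers diverge sharply at low fraud prevalence.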
Your fraud stack, engineered.
30-minute scoping with a senior engineer and an SIU-systems operator. You'll leave with a model plan, integration sketch, and realistic timeline — not a sales pitch.