Diligent’s AI Remediation Agent automatically analyzes name screening alerts and their hits using AI-powered entity resolution, providing evidence-based FALSE_POSITIVE or TRUE_POSITIVE determinations.

Overview

The remediation agent processes alerts created from your searches:
  1. Monitor for new alerts - Automatically detects alerts from completed searches or by polling the provider
  2. Gather intelligence - Enriches alerts with registry data (company records, officer information)
  3. Analyze matches - Uses AI to compare subject profiles against hit profiles with evidence-based reasoning
  4. Post resolutions - Automatically posts FALSE_POSITIVE or TRUE_POSITIVE determinations to your provider
  5. Track sync status - Updates alert remediation_status to reflect sync progress
All processing happens automatically in the background; no API calls are required. The sketch below illustrates the high-level loop.
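For readers who prefer code, here is a minimal sketch of that loop. Every name in it (Alert, provider, ai_engine, registry and their methods) is an illustrative placeholder, not part of a public Diligent API:

```python
# Illustrative sketch of the background remediation loop (all names are hypothetical).
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    subject: dict                               # profile of the screened entity
    hits: list = field(default_factory=list)    # potential matches from the provider
    remediation_status: str = "UNREMEDIATED"

def remediate_once(provider, ai_engine, registry):
    """One pass of the monitor -> enrich -> analyze -> post -> track cycle."""
    for alert in provider.fetch_new_alerts():                 # 1. monitor / poll
        intelligence = registry.lookup(alert.subject)         # 2. gather intelligence
        resolutions = ai_engine.analyze(alert, intelligence)  # 3. evidence-based analysis
        alert.remediation_status = "PENDING_SYNC"
        try:
            provider.post_resolutions(alert, resolutions)     # 4. post determinations
            alert.remediation_status = "REMEDIATED_SYNCED"    # 5. track sync status
        except Exception:                                     # any sync failure
            alert.remediation_status = "REMEDIATED_UNSYNCED"
```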

Remediation Workflow

The remediation status tracks the alert through its lifecycle (modeled as a simple state machine in the sketch after this list):
  1. UNREMEDIATED - Alert created, awaiting AI analysis
  2. PENDING_SYNC - AI has made resolutions, waiting to sync to provider
  3. REMEDIATED_SYNCED - Resolutions successfully synced to provider
  4. REMEDIATED_UNSYNCED - Resolutions made but couldn’t sync (manual intervention may be needed)
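One way to picture the lifecycle is as a small state machine. The Python below is illustrative only, not an API contract; the retry path from REMEDIATED_UNSYNCED back to REMEDIATED_SYNCED is an assumption:

```python
from enum import Enum

class RemediationStatus(str, Enum):
    UNREMEDIATED = "UNREMEDIATED"                # alert created, awaiting AI analysis
    PENDING_SYNC = "PENDING_SYNC"                # resolutions made, waiting to sync
    REMEDIATED_SYNCED = "REMEDIATED_SYNCED"      # resolutions synced to provider
    REMEDIATED_UNSYNCED = "REMEDIATED_UNSYNCED"  # sync failed; may need manual follow-up

# Transitions implied by the lifecycle above (the last entry is an assumed retry path).
ALLOWED_TRANSITIONS = {
    RemediationStatus.UNREMEDIATED: {RemediationStatus.PENDING_SYNC},
    RemediationStatus.PENDING_SYNC: {
        RemediationStatus.REMEDIATED_SYNCED,
        RemediationStatus.REMEDIATED_UNSYNCED,
    },
    RemediationStatus.REMEDIATED_UNSYNCED: {RemediationStatus.REMEDIATED_SYNCED},
}
```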

How It Works

1. Alert Detection

The system continuously monitors your screening provider for new alerts. When a new case appears:
  • Alert is automatically ingested with full profile data
  • All hits (potential matches) are retrieved
  • Related articles and media are captured
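A minimal polling sketch of this step, assuming a hypothetical provider client whose fetch_cases_since, get_hits, and get_articles methods stand in for whatever the real provider integration calls:

```python
from datetime import datetime, timezone

def detect_new_alerts(provider, last_checked: datetime):
    """Poll the provider and ingest any cases created since the last check."""
    alerts = []
    for case in provider.fetch_cases_since(last_checked):    # date-based polling
        alerts.append({
            "case_id": case["id"],
            "subject": case["profile"],                       # full profile data
            "hits": provider.get_hits(case["id"]),            # all potential matches
            "articles": provider.get_articles(case["id"]),    # related articles and media
            "remediation_status": "UNREMEDIATED",
        })
    return alerts, datetime.now(timezone.utc)                 # new "last checked" marker
```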

2. AI Analysis & Enrichment

The AI engine enriches each alert with additional intelligence and performs deep analysis:
  • Gathers supporting data from public registries and databases
  • Compares the subject against each potential match using multiple data points
  • Identifies supporting and contradicting evidence
  • Generates a confidence score and human-readable explanation
  • Determines whether the match is a FALSE_POSITIVE or TRUE_POSITIVE
All of this happens automatically within minutes of a new alert appearing in your provider.
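To make the comparison step concrete, here is a simplified sketch of the per-field evidence the analysis produces. The Evidence structure and compare function are illustrative; the production engine evaluates many more data points:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    field: str            # e.g. "name", "date_of_birth", "country"
    strength: str         # "strong", "moderate", or "weak"
    supports_match: bool  # True = supports the match, False = contradicts it
    detail: str

def compare(subject: dict, hit: dict) -> list[Evidence]:
    """Compare a subject profile against one hit profile on a few data points."""
    evidence = []
    if subject.get("name") and hit.get("name"):
        same = subject["name"].lower() == hit["name"].lower()
        evidence.append(Evidence("name", "strong", same, "name comparison"))
    if subject.get("dob") and hit.get("dob"):
        same = subject["dob"] == hit["dob"]
        evidence.append(Evidence("date_of_birth", "strong", same,
                                 f"subject {subject['dob']} vs hit {hit['dob']}"))
    if subject.get("country") and hit.get("country"):
        same = subject["country"] == hit["country"]
        evidence.append(Evidence("country", "moderate", same,
                                 f"{subject['country']} vs {hit['country']}"))
    return evidence
```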

3. Resolution Posting

For each hit, the system:
  • Posts a comment and a resolution (FALSE_POSITIVE or TRUE_POSITIVE) to the provider, if so configured
  • Includes a summary explaining the determination
  • Preserves any existing manual comments from your team
  • Adds a “Diligent AI: ” prefix for tracking (see the sketch below)
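A minimal sketch of how such a comment might be assembled. The build_remarks helper and the example comment text are hypothetical:

```python
AI_PREFIX = "Diligent AI: "

def build_remarks(existing_remarks: str, determination: str, summary: str) -> str:
    """Compose provider remarks: keep the team's manual comments, append the AI analysis."""
    ai_comment = f"{AI_PREFIX}{determination} - {summary}"
    if existing_remarks and existing_remarks.strip():
        return f"{existing_remarks.rstrip()}\n{ai_comment}"   # preserve manual notes
    return ai_comment

# Example usage (placeholder values):
# build_remarks("Reviewed by analyst, 2024-01-03", "FALSE_POSITIVE",
#               "Temporal impossibility: DOB incompatible with marriage date.")
```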

How the AI Makes Decisions

The AI evaluates each potential match by comparing multiple data points:
  • Identity information: Names, dates of birth, incorporation dates
  • Location data: Addresses, countries of residence or operation
  • Identifiers: Company registration numbers, tax IDs, passport numbers
  • Contextual data: Industry codes, related entities, historical records
Each piece of evidence is evaluated for strength (strong, moderate, or weak) and whether it supports or contradicts the match. The AI then generates an overall confidence score and a clear explanation of its reasoning.
Key principle: Any strong contradicting evidence (such as a temporal impossibility) results in an immediate FALSE_POSITIVE determination, regardless of other supporting factors.
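Continuing the Evidence sketch from the analysis section above, the key principle can be expressed roughly like this. The weights and the 80-point threshold are illustrative, not the production scoring model:

```python
def resolve(evidence: list) -> tuple[str, int]:
    """Apply the key principle: strong contradicting evidence wins outright."""
    if any(e.strength == "strong" and not e.supports_match for e in evidence):
        return "FALSE_POSITIVE", 0                     # e.g. a temporal impossibility

    # Otherwise weigh supporting vs. contradicting evidence into a 0-100 score.
    weights = {"strong": 3, "moderate": 2, "weak": 1}
    supporting = sum(weights[e.strength] for e in evidence if e.supports_match)
    total = sum(weights[e.strength] for e in evidence) or 1
    score = round(100 * supporting / total)
    determination = "TRUE_POSITIVE" if score >= 80 else "FALSE_POSITIVE"
    return determination, score
```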

Example Resolution

Subject: Sarah Mitchell, DOB: 1988-03-15, UK
Hit: Sarah MITCHELL, INACTIVE PEP, spouse of political figure (born 1965)
AI Determination: FALSE_POSITIVE (Score: 0/100)
Reasoning: “Temporal impossibility: Sarah Mitchell was born in 1988, but the hit profile indicates marriage to a political figure in 1992—when the subject would have been 4 years old. Additional evidence from public records shows the hit individual must have been born before 1965 to marry in 1992. These are different individuals with the same name.”
Evidence:
  • ✅ MATCH (STRONG): Exact name match - “Sarah Mitchell”
  • ✅ MATCH (MODERATE): Nationality - United Kingdom
  • ❌ MISMATCH (STRONG): Temporal impossibility - marriage date incompatible with DOB
  • ❌ MISMATCH (STRONG): Age discrepancy - 20+ year difference inferred
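Before syncing, a resolution like the one above can be pictured as a structured record along these lines. The field names are illustrative, not a documented schema:

```python
example_resolution = {
    "subject": {"name": "Sarah Mitchell", "dob": "1988-03-15", "country": "GB"},
    "hit": {"name": "Sarah MITCHELL", "category": "PEP", "pep_status": "INACTIVE"},
    "determination": "FALSE_POSITIVE",
    "confidence_score": 0,
    "reasoning": "Temporal impossibility: marriage date incompatible with subject's DOB.",
    "evidence": [
        {"field": "name", "strength": "strong", "supports_match": True},
        {"field": "nationality", "strength": "moderate", "supports_match": True},
        {"field": "marriage_date_vs_dob", "strength": "strong", "supports_match": False},
        {"field": "inferred_age", "strength": "strong", "supports_match": False},
    ],
}
```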

Provider Integration

The remediation agent works seamlessly with:

WorldCheck (LSEG World-Check One)

  • Monitors cases via date-based search
  • Posts resolutions using WorldCheck resolution toolkit
  • Maps AI determinations to configured risk/reason taxonomy
  • Preserves existing remarks when updating
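For illustration, the determination-to-taxonomy mapping might look like the sketch below. The label values are the configurable defaults listed under Configuration, and the dictionary is a local sketch, not the World-Check One resolution toolkit API:

```python
# One plausible mapping from AI determinations to a configured risk/reason taxonomy.
# Label values are examples; each account's actual taxonomy is set during onboarding.
DETERMINATION_TO_TAXONOMY = {
    "FALSE_POSITIVE": {"risk": "LOW",  "reason": "No Match"},
    "TRUE_POSITIVE":  {"risk": "HIGH", "reason": "Full Match"},
}

def to_resolution_payload(determination: str, summary: str, existing_remark: str) -> dict:
    """Build a local resolution record; field names are placeholders, not API fields."""
    labels = DETERMINATION_TO_TAXONOMY[determination]
    return {
        "risk": labels["risk"],
        "reason": labels["reason"],
        "remark": f"{existing_remark}\nDiligent AI: {summary}".strip(),  # keep prior remarks
    }
```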

ComplyAdvantage CSOM

  • Monitors searches via API pagination
  • Posts resolutions as entity comments
  • Prefixes AI comments with “Diligent AI: ” for tracking
  • Supports both comment-only and full status updates

LexisNexis Bridger

  • Monitors records via predefined search queries
  • Posts resolutions to case assignment system
  • Updates case status and adds remarks
  • Maintains assignment role/division configuration

Configuration

Contact us at [email protected] to enable AI remediation for your account. To get started, please provide the following information for your screening provider; an illustrative summary of these details follows the provider lists below:

WorldCheck (LSEG World-Check One)

  • API Key: Your WorldCheck API key
  • API Secret: Your WorldCheck API secret
  • Account ID: Your WorldCheck account identifier
  • Group ID: The group ID to use when creating new cases (determines screening scope and available fields)
  • Risk Label: Default risk category for resolutions (e.g., "LOW", "MEDIUM", "HIGH", "UNKNOWN")
  • Reason Label: Default reason category for resolutions (e.g., "No Match", "Full Match", "Partial Match")

ComplyAdvantage CSOM

  • API Key: Your ComplyAdvantage API key
  • Search Profile ID (optional): Default search profile to use for new searches
  • Region: API region - EU (api.eu.complyadvantage.com) or US (api.us.complyadvantage.com)

LexisNexis Bridger

  • API Key: Your LexisNexis Bridger API key
  • Username: Account username in format client_id/user_id
  • Password: Account password
  • Predefined Search Name: Name of your predefined search configuration
  • Assignment Role (optional): Default role to assign cases to
  • Assignment Division (optional): Default division to assign cases to
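As an illustrative summary, the details above could be organized like this when you send them to us. All values are placeholders; nothing here is a real credential or a documented schema:

```python
# Placeholder onboarding details per provider (values are examples only).
provider_configuration = {
    "worldcheck": {
        "api_key": "<your-api-key>",
        "api_secret": "<your-api-secret>",
        "account_id": "<account-id>",
        "group_id": "<group-id>",
        "risk_label": "LOW",          # default risk category for resolutions
        "reason_label": "No Match",   # default reason category for resolutions
    },
    "complyadvantage": {
        "api_key": "<your-api-key>",
        "search_profile_id": None,    # optional
        "region": "EU",               # EU -> api.eu.complyadvantage.com
    },
    "lexisnexis_bridger": {
        "api_key": "<your-api-key>",
        "username": "client_id/user_id",
        "password": "<password>",
        "predefined_search_name": "<search-name>",
        "assignment_role": None,      # optional
        "assignment_division": None,  # optional
    },
}
```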

AI Behavior Configuration

The remediation agent’s decision-making can be customized based on your risk appetite and compliance requirements; an illustrative rule set is sketched at the end of this section. Configuration options include:
Hit Category Rules
  • Define different behavior for different alert types (SANCTION, PEP, WATCHLIST, MEDIA)
  • For example: conservative rules for sanctions (comment-only analysis) vs. permissive rules for adverse media (auto-resolve clear false positives)
Evidence Thresholds
  • Set minimum evidence strength required for auto-resolution (weak, moderate, or strong)
  • Configure separate thresholds for name evidence vs. additional evidence (DOB, location, identifiers, etc.)
  • Use multiple rule segments per category for nuanced decision-making
Resolution Actions
  • Auto-resolve: AI posts definitive resolution status to provider (e.g., mark as FALSE_POSITIVE)
  • Comment-only: AI adds analysis as a comment without changing hit status (human reviews and decides)
  • Configure different actions based on evidence confidence
Example Configurations:
  • Conservative (sanctions): Comment-only for all cases, even with strong contradicting evidence
  • Balanced (PEP): Auto-resolve slam-dunk false positives, comment on edge cases
  • Permissive (media): Auto-resolve both clear false positives and clear true positives
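Here is a sketch of how such a rule set might be written down. The schema and values are illustrative only; the actual configuration format is managed by Diligent for your account:

```python
# Illustrative rule set combining hit-category rules, evidence thresholds,
# and resolution actions (schema and values are examples, not a documented format).
remediation_rules = {
    "SANCTION": [
        # Conservative: never change hit status, only attach the AI analysis.
        {"action": "comment_only"},
    ],
    "PEP": [
        # Balanced: auto-resolve only slam-dunk false positives...
        {"determination": "FALSE_POSITIVE",
         "min_name_evidence": "strong",
         "min_additional_evidence": "strong",
         "action": "auto_resolve"},
        # ...and fall back to comment-only on edge cases.
        {"action": "comment_only"},
    ],
    "MEDIA": [
        # Permissive: auto-resolve clear outcomes in either direction.
        {"min_additional_evidence": "moderate", "action": "auto_resolve"},
        {"action": "comment_only"},
    ],
}
```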
Contact [email protected] to discuss your preferred configuration strategy.

Support

For questions, configuration assistance, or to get started with AI remediation: Contact us at [email protected]