Secure AI for Claims Processing, Underwriting, and Compliance

    Ertas gives insurance companies on-premise data preparation for sensitive policyholder data and visual fine-tuning for domain-specific models — accelerating claims processing and underwriting without compromising data security or regulatory compliance.

    The Challenges You Face

    Claims Processing Is Document-Heavy and Slow

    Every claim involves medical records, police reports, repair estimates, correspondence, and policy documents. Manually extracting relevant information, verifying coverage, and routing claims consumes enormous adjuster time — and delays frustrate policyholders.

    Policyholder Data Is Heavily Regulated

    State insurance regulations, GLBA, and HIPAA (for health claims) impose strict requirements on how policyholder information is processed and stored. Cloud AI services that transmit data to external servers add compliance hurdles that slow adoption.

    Underwriting Models Need Institutional Knowledge

    Effective underwriting depends on decades of institutional knowledge about risk factors, loss patterns, and pricing strategies. Generic AI models lack this domain expertise, and the proprietary nature of underwriting data makes external training partnerships risky.

    Fraud Detection Requires Nuanced Pattern Recognition

    Insurance fraud comes in many forms — staged accidents, inflated claims, provider billing schemes — each with subtle indicators that differ by line of business. Generic anomaly detection misses the domain-specific patterns that experienced adjusters recognize.

    How Ertas Solves This

    Ertas Data Suite processes sensitive policyholder data entirely on-premise. The native desktop application requires no network connectivity, so medical records, financial information, and claims documents never leave your data center. The five-module pipeline — Ingest, Clean, Label, Augment, Export — transforms unstructured claims data into clean, labeled training datasets with a complete audit trail.
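    As an illustration, the five-module flow can be pictured as a simple function pipeline. The module names follow the text above, but the function signatures and record type here are hypothetical sketches, not the actual Data Suite API:

```python
from dataclasses import dataclass, field

# Hypothetical record type; the real Data Suite schema is not shown here.
@dataclass
class ClaimRecord:
    text: str
    labels: dict = field(default_factory=dict)

def ingest(raw_docs):          # read PDFs, scans, and forms into records
    return [ClaimRecord(text=d) for d in raw_docs]

def clean(records):            # normalize whitespace across varied formats
    for r in records:
        r.text = " ".join(r.text.split())
    return records

def label(records, annotate):  # adjusters attach domain labels to each record
    for r in records:
        r.labels = annotate(r.text)
    return records

def augment(records):          # placeholder for balancing underrepresented cases
    return records

def export(records):           # emit a training-ready dataset
    return [{"text": r.text, "labels": r.labels} for r in records]

dataset = export(augment(label(clean(ingest(["Water  damage,\nkitchen"])),
                               lambda t: {"loss_type": "water"})))
```

    Each stage consumes the previous stage's output, which is what makes the end-to-end audit trail possible: every record that reaches Export has passed through every upstream module.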

    Ertas Studio then fine-tunes models on these prepared datasets, producing AI that understands your specific policy language, claims patterns, and underwriting criteria. Exported GGUF models run on your own infrastructure, keeping policyholder data within your security perimeter during both data preparation and inference.

    For insurance companies, this means AI-accelerated claims processing and underwriting that satisfies regulatory requirements, leverages your institutional knowledge, and scales without proportionally scaling your workforce.

    Key Features for Insurance Companies

    Data Suite

    On-Premise Claims Data Processing

    Data Suite ingests claims documents — PDFs, scanned forms, medical records, repair estimates — and normalizes them into structured training data without any information leaving your network. Air-gapped operation satisfies the most stringent data handling requirements.

    Data Suite

    Adjuster-Guided Labeling

    Experienced adjusters and underwriters label claims data using the Label module, encoding their domain expertise into training datasets. Their knowledge of coverage nuances, red flags, and risk factors becomes embedded in the AI model.

    Studio

    Policy-Specific Model Training

    Fine-tune models on your actual policy language, coverage structures, and claims outcomes. The resulting model understands your products specifically — not insurance in general — producing more accurate coverage determinations and claims routing.

    Vault

    Regulatory-Grade Audit Trail

    Every data transformation, labeling decision, and training run is logged with immutable timestamps and user attribution. Export audit records for regulatory examinations, market conduct reviews, and internal compliance documentation.
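    Conceptually, an immutable, attributable log can be modeled as a hash chain, where each entry's hash covers the previous entry so later tampering is detectable. This is an illustrative sketch of the idea, not Vault's actual record format:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, user, action):
    """Append an audit entry whose hash covers the previous entry's hash,
    so altering any earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash and check linkage to detect tampering."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "user", "action", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != recomputed:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "adjuster_7", "labeled claim 1042: loss_type=water")
append_entry(log, "analyst_2", "exported dataset v3")
```

    Because each hash depends on everything before it, exporting the chain alongside the dataset lets an examiner independently re-verify that no labeling decision or transformation was altered after the fact.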

    Why It Works

    • Insurance companies using Data Suite for claims data preparation have maintained full compliance with state insurance data security regulations while building AI training datasets — eliminating the multi-month vendor risk assessment required for cloud-based alternatives.
    • Claims triage models fine-tuned on institutional data route claims to the correct handling team with significantly higher accuracy than rules-based systems, reducing re-routing delays.
    • Adjuster-labeled training data captures domain expertise that would otherwise require years of on-the-job experience, helping new adjusters reach competency faster through AI-assisted workflows.
    • Self-hosted GGUF models process claims documents within the insurer's data center, satisfying GLBA Safeguards Rule requirements without architectural changes to the existing security infrastructure.
    • The audit trail provides the documentation needed for state insurance department market conduct examinations regarding the use of AI in claims handling and underwriting decisions.

    Example Workflow

    A property and casualty insurer wants to build an AI model that extracts key data from first notice of loss (FNOL) reports and recommends initial reserves. A claims operations analyst opens Ertas Data Suite on a secure workstation, ingests 15,000 historical FNOL reports through the Ingest module, and runs the Clean module to normalize the varied formats.

    Senior adjusters use the Label module to annotate each report with loss type, severity indicators, coverage applicability, and initial reserve recommendations. The Augment module generates variations of underrepresented loss types to ensure balanced training coverage. A versioned dataset is exported with complete audit metadata.

    The insurer's IT team uploads the prepared dataset to Ertas Studio, fine-tunes a model, and deploys the exported GGUF on internal servers. The model assists intake adjusters by extracting key FNOL data and suggesting initial reserves, reducing average processing time while providing the audit trail regulators require.
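    In a workflow like this, the model's suggestions would typically come back as structured fields that are validated before they reach the intake queue. A minimal sketch of that validation step follows; the field names, allowed loss types, and checks are hypothetical, not a prescribed schema:

```python
REQUIRED_FIELDS = {"loss_type", "severity", "coverage_applies", "initial_reserve"}
VALID_LOSS_TYPES = {"water", "fire", "wind", "theft", "liability"}

def validate_fnol_output(payload):
    """Reject malformed model output so only well-formed suggestions
    reach the intake adjuster's queue."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    if payload["loss_type"] not in VALID_LOSS_TYPES:
        return False, "unknown loss_type"
    reserve = payload["initial_reserve"]
    if not isinstance(reserve, (int, float)) or reserve < 0:
        return False, "invalid reserve amount"
    return True, "ok"

ok, msg = validate_fnol_output({
    "loss_type": "water",
    "severity": "moderate",
    "coverage_applies": True,
    "initial_reserve": 12500.0,
})
```

    Gating model output this way keeps the human adjuster in control: anything that fails validation falls back to the manual intake path rather than silently entering the reserve-setting process.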

    Ship AI that runs on your users' devices.

    Early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.