    Tags: ai-governance, access-control, regulated-industries, compliance, enterprise-ai

    AI Model Access Control in Regulated Industries: Who Gets to Query What

    Not everyone in your organization should have the same access to the same AI models. Here's how to design role-based access control for AI systems in healthcare, legal, and financial environments.

    Ertas Team

    Most enterprises have mature role-based access control for applications and databases. The payroll system knows that HR can view salary data and managers can view their direct reports. The legal document management system knows that only attorneys assigned to a matter can access its documents.

    AI models, in most organizations, have none of that. Anyone with an API key can send anything to the model and receive a response. There are no roles, no restrictions on what data can appear in queries, and often no log of who queried what.

    In regulated industries, this is a compliance problem. In some cases, it's a legal one.

    Why AI Access Control Is Different from Data Access Control

    Traditional access control governs who can read or write data. AI access control governs something more complex: who can query which model, with which data, and what they can do with the output.

    When a nurse searches a patient record, you control what records they can access. When that same nurse queries an AI with a patient's details to get a clinical recommendation, three things happen in a single interaction: data flows to the AI system, the AI processes it, and an output flows back. Each stage has distinct governance implications.

    The access control problem for AI has four dimensions that traditional RBAC doesn't fully address (a code sketch after the list shows how they compose):

    Model access: which users and roles can query which models. A trainee clinician shouldn't have access to the same AI capabilities as an attending physician. An associate attorney shouldn't query AI with partner-level client matters without supervision. A junior analyst shouldn't be able to submit financial models for AI-assisted regulatory analysis without review.

    Data access in queries: what data can be included in AI prompts. Even if a user has read access to a document, including it in an AI query creates new risks — the document now travels to the AI system, potentially outside your security perimeter, and the AI's response may synthesize and expose information in ways the original document doesn't.

    Output access: who can see AI outputs. In healthcare, AI-generated diagnostic suggestions may have HIPAA implications if shared broadly. In legal, AI-generated analysis of a privileged matter may itself be privileged — sharing it broadly could waive that privilege. In finance, AI-generated trading signals may be material non-public information depending on context.

    Action access: what downstream actions can be triggered by AI output. The riskiest access control failure isn't a user seeing AI output they shouldn't — it's a user or automated system taking an action based on that output without appropriate authorization. Approving a loan, scheduling a procedure, filing a regulatory document — the action access layer is where AI governance failures become consequential.
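
    Composed, the four dimensions might look like the minimal sketch below. The roles, model names, and policy tables are hypothetical stand-ins for whatever your IAM system actually provides.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AIRequest:
        user_role: str                        # resolved from your identity provider
        model: str                            # which model the user wants to query
        data_tags: set[str]                   # classifications detected in the prompt
        proposed_action: str | None = None    # downstream action, if any

    # Hypothetical policy tables. In production these live in your IAM or
    # policy engine, not in application code.
    MODEL_ACCESS = {
        "attending_physician": {"clinical_decision_support", "general_assistant"},
        "trainee_clinician": {"general_assistant"},
    }
    DATA_ACCESS = {
        "attending_physician": {"phi", "general"},
        "trainee_clinician": {"general"},
    }
    ACTION_ACCESS = {
        "attending_physician": {"schedule_procedure"},
        "trainee_clinician": set(),
    }

    def authorize(req: AIRequest) -> tuple[bool, str]:
        """Check the model, data, and action dimensions before the query is
        sent. Output access, the fourth dimension, is enforced on the
        response path, where visibility rules decide who sees what comes back."""
        if req.model not in MODEL_ACCESS.get(req.user_role, set()):
            return False, "role may not query this model"
        if not req.data_tags <= DATA_ACCESS.get(req.user_role, set()):
            return False, "prompt contains data outside this role's authorization"
        if req.proposed_action and req.proposed_action not in ACTION_ACCESS.get(req.user_role, set()):
            return False, "downstream action requires separate authorization"
        return True, "authorized"

    # A trainee querying the clinical model is refused at the first gate.
    assert not authorize(AIRequest("trainee_clinician", "clinical_decision_support", {"general"}))[0]
    ```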

    Industry-Specific Requirements

    Healthcare: Minimum Necessary and HIPAA

    The HIPAA minimum necessary standard (45 CFR §164.502(b), with implementation specifications at §164.514(d)) requires covered entities to make reasonable efforts to limit access to PHI to the minimum necessary to accomplish the intended purpose.

    Applied to AI: a nurse querying an AI about a patient's care should only be able to query about patients under their direct care. A billing coder should be able to query AI about coding questions but not about clinical details unrelated to billing. The AI system's access controls should enforce these role distinctions, not just rely on individual compliance.

    HIPAA's audit control requirements (45 CFR §164.312(b)) require covered entities to implement mechanisms that record and examine activity in systems containing PHI. AI interaction logs — who queried what, when, with what data — must be part of the PHI access audit trail.

    The practical implication: your AI system needs to know who the user is (authentication), what their role authorizes them to do (authorization), and log every query with that context (audit). This is standard for clinical applications. It's rarely implemented for AI.
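
    As a minimal sketch of that triple, assuming a hypothetical `user` object carrying identity, role, and care-team assignments from your identity provider and EHR, and a locally hosted `model` interface:

    ```python
    import logging
    from datetime import datetime, timezone

    audit = logging.getLogger("ai.phi.audit")

    def handle_clinical_query(user, patient_id: str, prompt: str, model):
        """Authentication is assumed done upstream (see the SSO section
        below); this function enforces authorization and audit."""
        # Authorization: minimum necessary -- only patients under this user's care.
        if patient_id not in user.assigned_patients:
            audit.warning("DENIED user=%s patient=%s ts=%s",
                          user.id, patient_id,
                          datetime.now(timezone.utc).isoformat())
            raise PermissionError("patient is not under this user's care")

        response = model.query(prompt)  # hypothetical local-model interface

        # Audit: log the access context, not the PHI itself.
        audit.info("QUERY user=%s role=%s patient=%s model=%s ts=%s",
                   user.id, user.role, patient_id, model.version,
                   datetime.now(timezone.utc).isoformat())
        return response
    ```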

    Legal: Privilege and Supervision

    Attorney-client privilege creates specific AI access control requirements. AI systems used in legal work should enforce matter-level access controls consistent with the firm's document management system. Only attorneys and supervised staff assigned to a matter should be able to query AI with that matter's documents.

    Beyond matter access, there's a supervisory layer. Under ABA Model Rules 5.1 and 5.3, supervising attorneys are responsible for the work of associates and non-lawyers under their supervision. An AI governance framework should implement this: junior staff query AI for legal research or drafting, and the output is flagged for supervising-attorney review before it's used in work product.

    Query logging for legal AI needs to capture enough context for a supervising attorney to review what the AI was asked and what it said — not just that a query occurred.
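
    One way to encode that supervisory layer, with a hypothetical in-code matter-team table standing in for the DMS lookup:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class ReviewState(Enum):
        PENDING = "pending_review"
        APPROVED = "approved"
        REJECTED = "rejected"

    # Hypothetical matter-team table; in practice, query the firm's DMS.
    MATTER_SUPERVISORS = {"M-10425": {"partner_17"}}

    @dataclass
    class AIWorkProduct:
        matter_id: str
        author_id: str                  # the junior staffer who ran the query
        prompt: str                     # what the AI was asked...
        output: str                     # ...and what it said, so review is meaningful
        state: ReviewState = ReviewState.PENDING
        reviewer_id: str | None = None

    def release(wp: AIWorkProduct, reviewer_id: str) -> None:
        """Only a supervising attorney on the matter may release the output."""
        if reviewer_id not in MATTER_SUPERVISORS.get(wp.matter_id, set()):
            raise PermissionError("reviewer does not supervise this matter")
        wp.state = ReviewState.APPROVED
        wp.reviewer_id = reviewer_id
    ```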

    Finance: Separation of Duties

    SR 11-7, the Federal Reserve's supervisory guidance on model risk management, requires effective challenge: the team that builds and uses a model should not be the team that validates it. Access controls are the mechanism for enforcing this separation.

    Model developers should be able to query the model in a development/staging environment. They should not have unrestricted access to production model queries on live customer data. Validators should have read access to model behavior logs but should be isolated from the development team's configurations to ensure independent assessment.

    For AI used in client-facing decisions — credit, insurance, investment recommendations — query-level access controls should prevent staff from using AI to process data they don't have business authorization to process, even if they technically have system access.
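
    A permission matrix is one way to make the separation concrete. The roles, environments, and operations below are illustrative, not prescriptive:

    ```python
    # Hypothetical role x environment permission matrix enforcing the
    # SR 11-7 separation: developers work in staging, validators read
    # production behavior logs but cannot touch configuration.
    PERMISSIONS = {
        ("model_developer", "staging"):    {"query", "configure"},
        ("model_developer", "production"): set(),           # no live customer data
        ("validator", "staging"):          {"read_logs"},
        ("validator", "production"):       {"read_logs"},   # independent assessment
        ("business_user", "production"):   {"query"},       # within business authorization
    }

    def allowed(role: str, environment: str, operation: str) -> bool:
        return operation in PERMISSIONS.get((role, environment), set())

    assert allowed("validator", "production", "read_logs")
    assert not allowed("model_developer", "production", "query")
    ```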

    The Shadow AI Problem

    The most common objection to AI access controls is that they're too restrictive — staff will just use personal AI accounts if the enterprise system is too locked down.

    This is true. And it's an argument for getting access controls right, not for abandoning them.

    When staff use personal AI accounts for work, you get: data egress to uncontrolled systems, no audit trail, potential HIPAA/GDPR violations (for healthcare/EU operations), privilege risk (for legal), and no institutional record of AI-assisted work product. That's a worse outcome than a somewhat restricted enterprise system.

    The goal of enterprise AI access controls isn't to prevent all AI use. It's to channel AI use through systems where it can be logged, governed, and audited — while preventing the highest-risk patterns (PHI in personal accounts, privileged documents in consumer AI, financial data in uncontrolled systems).

    Practical Implementation

    Model-level controls: deploy different models for different roles. A clinical decision support model shouldn't be accessible to the billing team. A privileged document analysis model shouldn't be accessible to staff without matter access. Locally-run fine-tuned models (deployed via Ollama or llama.cpp on your network) can be scoped to specific users and roles through your existing network access controls — no queries leave your perimeter.

    Query-level DLP: data loss prevention rules that scan prompts for patterns that shouldn't appear in AI queries — PHI identifiers in queries from non-clinical staff, client matter numbers in queries from unauthorized staff, PII patterns in queries from systems without appropriate authorization.
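
    A sketch of such a rule. The regex patterns are illustrative only, and the matter-number format is hypothetical; production DLP needs far broader coverage and tuning against false positives:

    ```python
    import re

    # Illustrative patterns only; the matter-number format is hypothetical.
    PATTERNS = {
        "ssn":           re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "mrn":           re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "matter_number": re.compile(r"\bM-\d{4,6}\b"),
    }

    def dlp_violations(prompt: str, role_allowed_tags: set[str]) -> list[str]:
        """Return the sensitive-data patterns found in the prompt that this
        role is not authorized to include in an AI query."""
        hits = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
        return [h for h in hits if h not in role_allowed_tags]

    # A non-clinical role sending an MRN is blocked before the query leaves.
    violations = dlp_violations("Summarize history for MRN: 00482913", set())
    if violations:
        raise PermissionError(f"prompt blocked by DLP: {violations}")
    ```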

    Output-level visibility: role-based visibility on AI outputs with audit trail. Some outputs should require supervisor review before use. All outputs should be logged with the user identity and timestamp.

    SSO integration: AI access should be managed through your enterprise SSO, not through standalone API keys shared within teams. API keys create accountability gaps — you can't trace a query to an individual user from a shared key.
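
    With a library like PyJWT, attributing each query to an individual through the SSO-issued token might look like the sketch below; the audience and issuer values are placeholders for your IdP's configuration:

    ```python
    import jwt  # PyJWT; assumes your identity provider issues OIDC JWTs

    def identify_caller(bearer_token: str, idp_public_key: str) -> dict:
        """Resolve an AI query to an individual user, not a shared key."""
        claims = jwt.decode(
            bearer_token,
            idp_public_key,
            algorithms=["RS256"],
            audience="ai-gateway",               # placeholder value
            issuer="https://sso.example.com",    # placeholder value
        )
        # `sub` identifies the individual; the roles claim name varies by IdP.
        return {"user_id": claims["sub"], "roles": claims.get("roles", [])}
    ```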

    Audit logging: every AI query should generate an audit log entry: user identity, role, timestamp, data types present in the query (without logging the full query if it contains PHI), model version queried, and whether the output was used in a downstream action.
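
    A minimal schema for such an entry, mirroring the fields listed above:

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class AIAuditEntry:
        user_id: str
        role: str
        timestamp: datetime
        data_tags: tuple[str, ...]      # e.g. ("phi",): classifications, never raw PHI
        model_version: str
        downstream_action: str | None   # e.g. "loan_approved"; None if advisory only
    ```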

    The On-Premise Advantage

    Cloud AI APIs create a structural access control challenge: the query leaves your perimeter before your access controls can fully evaluate it. DLP rules can scan the prompt before it's sent, but the enforcement point is at the edge of your network, not inside the AI system itself.

    Locally-run fine-tuned models change this. When the model runs on your infrastructure, your existing network access controls, authentication systems, and audit logging apply to the AI the same way they apply to any other system on your network. There's no third-party API to route around your controls.

    Book a discovery call with Ertas →

    Ertas Data Suite runs entirely on your infrastructure — no data egress, no third-party API calls during inference. Your existing access control infrastructure applies. Every processing step is logged with operator ID and timestamp. For regulated industries where AI access control is a compliance requirement, on-premise deployment is the architecturally correct foundation.

    For the fine-tuning step: train your domain models on cloud GPUs with your own data, export to GGUF, then deploy locally behind your access controls. Cloud for training, on-premise for inference.

    Turn unstructured data into AI-ready datasets — without it leaving the building.

    On-premise data preparation with full audit trail. No data egress. No fragmented toolchains. EU AI Act Article 30 compliance built in.
