Insider Threat Detection for SaaS: Monitoring the Risks You Trust the Most

Insider threats cause 60% of data breaches but remain the hardest to detect. Learn the behavioral patterns, LiteSOC event signatures, and detection rules that surface insider activity before damage is done.

Amirol Ahmad
April 29, 2026
9 min read

External attackers are noisy. They probe, they fail, they try from new IPs. Your alerts fire. Someone investigates.

Insider threats are quiet. They use legitimate credentials, access systems they're authorized for, and operate within normal business hours. Your alerts don't fire. Nobody investigates. And by the time you notice, the damage is done.

Insider threats are responsible for roughly 60% of data breaches according to industry research — yet most SaaS security stacks are built almost exclusively for perimeter defense. This guide covers the behavioral signals that distinguish legitimate insider activity from malicious or negligent insider behavior, and how to instrument your application to catch them.

What Counts as an Insider Threat?

Not all insiders are malicious. The threat model has three distinct categories:

  • Malicious insiders — Employees, contractors, or partners who deliberately exfiltrate data, sabotage systems, or sell access. Often motivated by financial gain, resentment, or external coercion.
  • Negligent insiders — Users who inadvertently expose data through misconfiguration, accidental sharing, or ignoring security policies. No malicious intent, but equally damaging.
  • Compromised insiders — Legitimate accounts taken over by external attackers. The account behaves like an insider (valid credentials, known device) but is under adversary control.

Each category has distinct behavioral signatures, but they share one thing: they all appear in your event log.

Why SaaS Applications Are Particularly Exposed

SaaS applications are high-value targets for insider threats because:

  1. Data is centralized — Customer records, financial data, and PII are in one place with an accessible API.
  2. Access is broad — Sales reps, support agents, and engineers routinely access sensitive data as part of their job.
  3. Offboarding is inconsistent — Access is often not revoked promptly when someone leaves or changes roles.
  4. Audit trails are thin — Most SaaS apps log errors but not the full breadth of data access activity.

The fix for the last point is deliberate instrumentation of your data.* events.

The Behavioral Patterns to Monitor

1. Bulk Data Export

The clearest indicator of data exfiltration is volume. A normal user might export a few records at a time. An insider preparing to leave — or selling access — typically exports at scale.

Event signature:

data.export — 1,000+ records in a single operation
data.export — repeated within a short window (3+ times in 1 hour)

Detection rule:

const EXPORT_VOLUME_THRESHOLD = 1000;
const REPEATED_EXPORT_THRESHOLD = 3;
const REPEATED_EXPORT_WINDOW = 3600; // 1 hour

async function detectBulkExport(event: SecurityEvent) {
  if (event.event_name !== 'data.export') return;

  const recordCount = event.metadata?.record_count ?? 0;

  // Single large export
  if (recordCount >= EXPORT_VOLUME_THRESHOLD) {
    await createAlert({
      type: 'bulk_data_export',
      severity: 'high',
      user_id: event.user_id,
      org_id: event.org_id,
      metadata: { record_count: recordCount },
    });
    return;
  }

  // Repeated exports within window
  const key = `export_freq:${event.org_id}:${event.user_id}`;
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, REPEATED_EXPORT_WINDOW);

  if (count >= REPEATED_EXPORT_THRESHOLD) {
    await createAlert({
      type: 'repeated_data_export',
      severity: 'medium',
      user_id: event.user_id,
      org_id: event.org_id,
      metadata: { export_count: count, window_seconds: REPEATED_EXPORT_WINDOW },
    });
  }
}

Track record_count in your data.export metadata. Without this field, bulk export detection is impossible.

2. After-Hours Data Access

Legitimate work happens at unusual hours — your engineers are distributed, support teams cover time zones. But after-hours data access by employees who normally work 9–5 in a single office is suspicious, especially when combined with other signals.

Event signature:

data.accessed — outside 07:00–20:00 in the actor's established timezone
data.downloaded — at 02:00 by an account with no prior late-night activity

Detection rule:

function isAfterHours(timestamp: Date, timezoneOffset: number): boolean {
  const localHour = (timestamp.getUTCHours() + timezoneOffset + 24) % 24;
  return localHour < 7 || localHour >= 20;
}

async function detectAfterHoursAccess(event: SecurityEvent) {
  const sensitiveEvents = ['data.export', 'data.downloaded', 'data.accessed'];
  if (!sensitiveEvents.includes(event.event_name)) return;

  const user = await getUser(event.user_id, event.org_id);
  const eventTime = new Date(event.created_at);

  if (isAfterHours(eventTime, user.timezone_offset)) {
    // Only alert if this is unusual for this user
    const hasHistoricalAfterHoursActivity = await checkBaseline(
      event.user_id,
      'after_hours_access',
      30 // days
    );

    if (!hasHistoricalAfterHoursActivity) {
      await createAlert({
        type: 'unusual_after_hours_access',
        severity: 'medium',
        user_id: event.user_id,
        org_id: event.org_id,
        metadata: {
          event_name: event.event_name,
          local_hour: (eventTime.getUTCHours() + user.timezone_offset + 24) % 24,
        },
      });
    }
  }
}

Baseline checks prevent alert fatigue. If a user routinely works at 11 PM, that's not anomalous for them. Flag deviations from the individual's 30-day baseline, not deviations from a global threshold.
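The checkBaseline helper used in the rule above is left abstract. One minimal sketch, assuming an in-memory event history — in production this would be a query against your event store, and all names here are illustrative:

```typescript
type BaselineEvent = { userId: string; kind: string; at: Date };

// Stand-in for your event store; production code would query your
// database or event pipeline instead of an in-process array.
const eventHistory: BaselineEvent[] = [];

function recordBaselineEvent(userId: string, kind: string, at: Date = new Date()): void {
  eventHistory.push({ userId, kind, at });
}

// Returns true if the user has any matching activity in the last `days`
// days, i.e. this behavior is already part of their established pattern.
async function checkBaseline(userId: string, kind: string, days: number): Promise<boolean> {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return eventHistory.some(
    (e) => e.userId === userId && e.kind === kind && e.at.getTime() >= cutoff
  );
}
```

With this shape, the first after-hours access by a 9-to-5 user raises an alert, while a night-shift user with a month of late activity on record does not.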

3. Accessing Records Outside Normal Scope

Support agents access customer records. That's normal. A support agent accessing 200 different customer records in a single afternoon — especially customers they've never interacted with before — is not.

Event signature:

data.accessed — 50+ distinct customer records in 1 hour by a single user
data.accessed — accessing records belonging to a competitor (if metadata includes company name)

Detection rule:

const SCOPE_BREACH_THRESHOLD = 50;
const SCOPE_WINDOW = 3600;

async function detectScopeCreep(event: SecurityEvent) {
  if (event.event_name !== 'data.accessed') return;

  const key = `scope:${event.org_id}:${event.user_id}`;
  const resourceId = event.metadata?.resource_id;
  if (!resourceId) return;

  await redis.sadd(key, resourceId);
  // Set the TTL only when the key is new, so the window is a fixed hour
  // rather than a sliding one that steady access keeps extending.
  if ((await redis.ttl(key)) < 0) await redis.expire(key, SCOPE_WINDOW);

  const uniqueResources = await redis.scard(key);

  if (uniqueResources >= SCOPE_BREACH_THRESHOLD) {
    await createAlert({
      type: 'excessive_data_access',
      severity: 'high',
      user_id: event.user_id,
      org_id: event.org_id,
      metadata: {
        unique_records_accessed: uniqueResources,
        window_seconds: SCOPE_WINDOW,
      },
    });
  }
}

4. Offboarding-Period Activity

The period between an employee's resignation announcement and their access termination is the highest-risk window for insider data theft. Access should be revoked on the day of departure at the latest.

Event signature:

data.export — within 14 days of an admin.user_offboarded or admin.user_deactivated event
admin.api_key_created — new API key created by a user whose account is scheduled for termination

Detection rule:

async function detectOffboardingRisk(event: SecurityEvent) {
  const highRiskEvents = ['data.export', 'admin.api_key_created', 'data.downloaded'];
  if (!highRiskEvents.includes(event.event_name)) return;

  // Check if there's a pending offboarding for this user
  const isPendingOffboard = await checkOffboardingStatus(
    event.user_id,
    event.org_id
  );

  if (isPendingOffboard) {
    await createAlert({
      type: 'offboarding_data_risk',
      severity: 'critical',
      user_id: event.user_id,
      org_id: event.org_id,
      metadata: {
        event_name: event.event_name,
        recommendation: 'Review activity and consider revoking access immediately.',
      },
    });
  }
}
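The checkOffboardingStatus call above is also left abstract. A minimal sketch, assuming offboarding is recorded when an admin.user_offboarded or admin.user_deactivated event is seen — the map and window constant are illustrative:

```typescript
// Pending offboardings keyed by "org:user", recorded when an
// admin.user_offboarded / admin.user_deactivated event arrives.
const pendingOffboards = new Map<string, Date>();

const OFFBOARD_RISK_WINDOW_DAYS = 14;

function markOffboarding(orgId: string, userId: string, at: Date = new Date()): void {
  pendingOffboards.set(`${orgId}:${userId}`, at);
}

async function checkOffboardingStatus(userId: string, orgId: string): Promise<boolean> {
  const markedAt = pendingOffboards.get(`${orgId}:${userId}`);
  if (!markedAt) return false;
  // Only treat activity inside the 14-day risk window as offboarding-related.
  const ageDays = (Date.now() - markedAt.getTime()) / (24 * 60 * 60 * 1000);
  return ageDays <= OFFBOARD_RISK_WINDOW_DAYS;
}
```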

5. Privilege Escalation Followed by Data Access

When a user's role is elevated and they immediately begin accessing sensitive data they couldn't reach before, that's either a compromised account or a malicious insider testing what they can now reach.

Event signature:

admin.role_assigned → data.export within 30 minutes (same user)
authz.permission_granted → data.accessed for sensitive resource type

This correlation between admin.* events and data.* events is one of the most reliable composite signals for insider threat detection.
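A correlation rule for this signature can be sketched in the same style as the rules for patterns 1–4, using an in-memory map of recent role grants in place of Redis state. The 30-minute window and event names mirror the signature above; the alerts array stands in for createAlert:

```typescript
const ESCALATION_WINDOW_MS = 30 * 60 * 1000; // 30 minutes

// Timestamp of the most recent role elevation, keyed by "org:user".
const recentRoleGrants = new Map<string, number>();

type CorrelationAlert = { type: string; severity: string; userId: string; orgId: string };
const alerts: CorrelationAlert[] = []; // stand-in for createAlert

function handleEscalationEvent(event: {
  event_name: string;
  user_id: string;
  org_id: string;
  at: number; // epoch milliseconds
}): void {
  const key = `${event.org_id}:${event.user_id}`;

  if (event.event_name === 'admin.role_assigned') {
    recentRoleGrants.set(key, event.at);
    return;
  }

  const grantedAt = recentRoleGrants.get(key);
  const isSensitive = ['data.export', 'data.accessed'].includes(event.event_name);

  // Privilege escalation immediately followed by sensitive data access.
  if (isSensitive && grantedAt !== undefined && event.at - grantedAt <= ESCALATION_WINDOW_MS) {
    alerts.push({
      type: 'escalation_then_access',
      severity: 'critical',
      userId: event.user_id,
      orgId: event.org_id,
    });
  }
}
```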

Instrumenting Your Application with LiteSOC

Insider threat detection is only as good as the events you emit. Here's a minimal instrumentation checklist:

import { LiteSOC } from '@litesoc/node';

const litesoc = new LiteSOC({ apiKey: process.env.LITESOC_API_KEY });

// Every time a user exports data
await litesoc.track({
  event_name: 'data.export',
  user_id: userId,
  org_id: orgId,
  metadata: {
    export_type: 'csv',      // csv, json, pdf
    resource_type: 'contacts', // what was exported
    record_count: records.length, // CRITICAL for volume detection
    filter_applied: !!filterQuery,
  },
});

// Every time a user views a sensitive record
await litesoc.track({
  event_name: 'data.accessed',
  user_id: userId,
  org_id: orgId,
  metadata: {
    resource_type: 'customer_record',
    resource_id: customerId,
    access_method: 'ui', // ui, api, support_tool
  },
});

// Every file download
await litesoc.track({
  event_name: 'data.downloaded',
  user_id: userId,
  org_id: orgId,
  metadata: {
    file_type: 'pdf',
    file_name: sanitize(fileName), // never log raw user input
    file_size_bytes: fileSize,
  },
});

The record_count and resource_id fields in metadata are what make volume-based and scope-based detection possible. Without them, you can detect that an export happened, but not whether it was suspicious.
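One way to make that contract explicit is a small guard at the emit site that reports missing fields before the event is sent. This mapping is a local convention sketch, not part of the LiteSOC SDK:

```typescript
// Metadata fields each event type needs for detection to work.
// The mapping is a team convention, not an SDK requirement.
const REQUIRED_METADATA: Record<string, string[]> = {
  'data.export': ['record_count'],
  'data.accessed': ['resource_id', 'resource_type'],
  'data.downloaded': ['file_type'],
};

// Returns the list of required fields absent from the event's metadata,
// so callers can log a warning (or fail CI) before emitting.
function missingDetectionFields(
  eventName: string,
  metadata: Record<string, unknown> = {}
): string[] {
  const required = REQUIRED_METADATA[eventName] ?? [];
  return required.filter((field) => metadata[field] === undefined);
}
```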

The Events Matrix for Insider Threat Detection

Event                      Risk Category      Threshold
data.export                Exfiltration       >1,000 records, or 3+ exports/hour
data.downloaded            Exfiltration       >50 downloads/hour
data.accessed              Scope creep        >50 unique records/hour
admin.api_key_created      Persistence        Any during offboarding period
admin.role_assigned        Privilege abuse    Outside business hours
authz.permission_denied    Probing            >5 in 60 seconds
admin.user_deleted         Sabotage           By non-owner account

Separating Signal from Noise

The biggest challenge with insider threat detection is false positives. A sales leader legitimately exports the entire customer list for a board presentation. A developer runs a bulk export during a data migration. These look identical to malicious behavior in the event log.

The solution is behavioral baselining. Rather than alerting on absolute thresholds, alert on deviations from a user's established pattern:

  • A user who has exported 5,000 records every Monday for 3 months is not suspicious on Monday.
  • A user who has never exported more than 50 records and suddenly exports 10,000 on their last day is critical.

LiteSOC's 30-day behavioral baseline is designed for exactly this use case. Configure alert thresholds per-user rather than per-organization for the lowest false positive rate.
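A per-user deviation check can be sketched as a comparison against the user's own historical maximum rather than a fixed global number. The multiplier and floor here are illustrative tuning knobs, not LiteSOC defaults:

```typescript
// Flag an export only when it far exceeds the user's own historical
// pattern, instead of tripping a global absolute threshold.
const DEVIATION_MULTIPLIER = 10; // "10x your usual volume" is anomalous
const MINIMUM_FLOOR = 100;       // ignore small exports entirely

function isAnomalousExport(recordCount: number, historicalMax: number): boolean {
  if (recordCount < MINIMUM_FLOOR) return false;
  if (historicalMax === 0) return true; // first-ever export at volume
  return recordCount >= historicalMax * DEVIATION_MULTIPLIER;
}
```

Under this rule, the sales leader whose baseline already includes 5,000-record Monday exports stays quiet, while the 10,000-record last-day export from a low-volume user fires.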

Responding to Insider Threat Alerts

Unlike external attack alerts that call for blocking and hunting, insider threat alerts require a careful, legally sound response:

  1. Do not immediately revoke access — Sudden revocation tips off the insider and may destroy forensic evidence of ongoing activity.
  2. Escalate to HR and Legal first — Insider threat investigations have legal implications. Loop in the right stakeholders before taking action.
  3. Preserve the audit trail — Ensure all events are retained and immutable. LiteSOC's audit log cannot be deleted by the organization under investigation.
  4. Scope the blast radius — Use the event log to identify exactly what data was accessed or exported. You'll need this for breach notification decisions.
  5. Review access grants — Audit who else has similar access levels and whether those are still appropriate.

Conclusion

Insider threats are fundamentally a data problem. You cannot detect what you don't measure. The good news is that the behavioral signals are consistent and detectable — bulk exports, after-hours access, scope creep, offboarding-period activity — they all appear in your event stream if you're emitting the right events.

The organizations that catch insider threats early are not the ones with the most sophisticated detection systems. They're the ones who decided to log record_count in their export events, who wired up data.accessed on their support tooling, and who built a 30-day baseline instead of relying on a static threshold.

Instrument first. Detect second. The signals are already there.
