# The First 24 Hours After a Security Incident: A Startup Playbook
When a security incident hits, every minute counts. Here's exactly what to do in the first 24 hours—even if you don't have a security team.

It's 2 AM. Your phone buzzes. A customer just emailed: "Why am I getting password reset emails I didn't request?"
Your stomach drops.
This is the moment every founder dreads. You might be dealing with a security incident—and you have no idea what to do next.
Take a breath. We've got you.
This is the playbook for the first 24 hours after discovering a potential security incident. It's designed for startups without dedicated security teams, and it will help you contain damage, preserve evidence, and communicate effectively.
## Hour 0-1: Confirm and Contain

### Don't Panic (But Move Fast)
Your first instinct might be to immediately shut everything down. Resist it. Rash actions can:
- Destroy evidence you'll need later
- Alert attackers that they've been discovered
- Cause more damage than the incident itself
Instead, take 5 minutes to assess.
### Confirm It's Actually an Incident
Not every anomaly is a breach. Ask yourself:
- Is this a real security event or a false positive?
- Could this be explained by legitimate user behavior?
- Is there corroborating evidence?
Signs it's real:
- Multiple users reporting the same issue
- Unusual patterns in your security logs
- Access from unexpected locations or IPs
- Data that shouldn't exist in certain places
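A fast first pass on "access from unexpected locations" is to count login sources in your auth log. A minimal sketch, using a sample file in place of `/var/log/auth.log` (log paths and formats vary by distro, so adjust the field index for yours):

```shell
# Sample sshd log lines (stand-in for /var/log/auth.log)
cat > /tmp/auth-sample.log <<'EOF'
Jan 10 02:13:01 app sshd[311]: Accepted publickey for deploy from 198.51.100.7 port 50312
Jan 10 02:14:22 app sshd[318]: Accepted password for root from 203.0.113.42 port 40110
Jan 10 02:15:09 app sshd[325]: Accepted password for root from 203.0.113.42 port 40112
EOF

# Count successful logins per source IP; an unfamiliar IP with many hits rises to the top
grep "Accepted" /tmp/auth-sample.log | awk '{print $(NF-2)}' | sort | uniq -c | sort -rn
```

An IP you don't recognize at the top of that list, especially for a privileged account, is exactly the corroborating evidence you're looking for.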
### Assemble Your Incident Team
Even at a 5-person startup, you need defined roles:
| Role | Responsibility | Who |
|---|---|---|
| Incident Lead | Coordinates response, makes decisions | CTO or most senior engineer |
| Technical Lead | Investigates and contains | Senior engineer |
| Communications | Updates stakeholders | CEO or founder |
| Scribe | Documents everything | Anyone available |
The scribe role is critical. You'll need a timeline later.
### Initial Containment
Based on what you know, take the minimum actions needed to stop active damage:
If credentials are compromised:
- Force password reset for affected accounts
- Revoke active sessions
- Temporarily disable affected accounts if needed
If an API key is exposed:
- Rotate the key immediately
- Check logs for unauthorized usage
- Assess what data the key could access
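Checking what an exposed key was actually used for can start with a search of your API access logs for the key's identifier. A sketch against a sample log file (the `ak_live_` prefix and the log format are made up for illustration; substitute your own key ID and log location):

```shell
# Sample API access log (stand-in for your real request logs)
cat > /tmp/api-access.log <<'EOF'
2025-01-10T02:01:11Z key=ak_live_1234 GET /v1/customers
2025-01-10T02:01:40Z key=ak_live_1234 GET /v1/customers/export
2025-01-10T02:02:03Z key=ak_live_9876 GET /v1/health
EOF

# Every endpoint the exposed key touched, with request counts
grep "ak_live_1234" /tmp/api-access.log | awk '{print $3, $4}' | sort | uniq -c
```

Bulk-read or export endpoints showing up here is a strong signal that data left the building, which changes your notification obligations.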
If there's active unauthorized access:
- Block the attacking IP (but save it first)
- Revoke the compromised access method
- Do NOT delete anything yet
```bash
# Example: Block an IP while preserving evidence
# First, log it
echo "$(date): Blocking IP 203.0.113.42 - unauthorized access attempt" >> incident-log.txt

# Then block it. AWS security groups can only *allow* traffic, so to deny a
# specific IP, add a deny rule to the subnet's network ACL instead
aws ec2 create-network-acl-entry \
  --network-acl-id acl-xxx \
  --rule-number 50 \
  --protocol tcp \
  --port-range From=443,To=443 \
  --cidr-block 203.0.113.42/32 \
  --rule-action deny \
  --ingress
```
## Hour 1-4: Investigate

### Preserve Evidence
Before you dig in, make sure you're not destroying evidence:
- Don't delete logs (even if they contain sensitive data)
- Don't reboot servers unless absolutely necessary
- Take snapshots of affected systems
- Export relevant logs to a separate location
```bash
# Export logs to a separate location before they rotate
mkdir -p /incident-evidence
cp -p /var/log/auth.log /incident-evidence/auth-$(date +%Y%m%d).log
cp -p /var/log/nginx/access.log /incident-evidence/nginx-$(date +%Y%m%d).log
```
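It's also worth hashing each exported file right away, so you can later demonstrate the evidence wasn't altered. A minimal sketch (the path and file content here are samples for illustration):

```shell
# Copy of an exported log (sample path and content for illustration)
mkdir -p /tmp/incident-evidence
echo "Jan 10 02:14:22 app sshd[318]: Accepted password for root" > /tmp/incident-evidence/auth-20250110.log

# Record SHA-256 hashes alongside the evidence...
sha256sum /tmp/incident-evidence/*.log > /tmp/incident-evidence/SHA256SUMS

# ...and re-verify them at any later point
sha256sum -c /tmp/incident-evidence/SHA256SUMS
```

Store the `SHA256SUMS` file (or at least its contents) somewhere the evidence directory itself can't be tampered with, such as your incident log.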
### Build a Timeline
This is where security event logging pays off. You need to answer:
- When did the incident start?
- What was the initial access vector?
- What did the attacker do after gaining access?
- What data or systems were affected?
If you're using LiteSOC, pull your event timeline:
```javascript
// Window boundaries for the query
const now = new Date();
const sevenDaysAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);

// Get all events for the affected user in the last 7 days
const events = await litesoc.events.list({
  actor_id: affectedUserId,
  start_date: sevenDaysAgo,
  end_date: now,
});

// Look for the anomaly that started it all
const anomalies = await litesoc.alerts.list({
  status: 'open',
  start_date: sevenDaysAgo,
});
```
### Key Questions to Answer
For authentication incidents:
- When was the last legitimate login?
- What IP addresses accessed the account?
- Was MFA bypassed or was it not enabled?
- Were any settings changed after the suspicious login?
For data exposure:
- What data was accessed?
- Was data exported or downloaded?
- Who had access to the exposed data?
- How long was it exposed?
For API key compromise:
- When was the key last used legitimately?
- What endpoints were called with the compromised key?
- Was any data modified or exfiltrated?
- Where was the key exposed (git commit, log file, etc.)?
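For the "where was it exposed" question, git history is the usual culprit: a key deleted from the working tree still lives in old commits. A self-contained sketch that builds a throwaway repo with a leaked key (the `ak_live_` prefix is made up), then searches every commit for it:

```shell
# Throwaway repo with a leaked key, purely for demonstration
repo=$(mktemp -d)
cd "$repo" && git init -q
echo 'API_KEY=ak_live_1234' > config.env
git add config.env
git -c user.email=ir@example.com -c user.name=ir commit -qm "add config"

# The actual search: grep every commit reachable from any ref
git grep "ak_live_" $(git rev-list --all)
```

If this turns up hits in a public or widely-cloned repo, treat the key as fully compromised regardless of what your usage logs show.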
### Determine the Blast Radius
You need to understand the scope:
- Users affected: 1? 100? All of them?
- Data exposed: Emails? Passwords? Payment info?
- Systems compromised: Just the app? Database? Infrastructure?
- Business impact: Service disruption? Data loss? Reputational damage?
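From your request logs, a first cut at "users affected" can be as simple as listing the distinct accounts the attacker's IP touched. A sketch over a sample log (the format is made up; map the fields to your own log schema):

```shell
# Sample request log: source IP, account, method, path (made-up format)
cat > /tmp/requests.log <<'EOF'
203.0.113.42 user_17 GET /v1/export
203.0.113.42 user_23 GET /v1/export
203.0.113.42 user_17 GET /v1/profile
198.51.100.7 user_05 GET /v1/profile
EOF

# Distinct accounts accessed from the attacker IP
awk '$1 == "203.0.113.42" {print $2}' /tmp/requests.log | sort -u
```

Two affected accounts versus two thousand leads to very different notification and containment decisions, so pin this number down early.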
Document everything. You'll need this for stakeholder communication and potentially for regulators.
## Hour 4-8: Communicate

### Internal Communication
Your team needs to know what's happening:
What to share:
- We're investigating a security incident
- Here's what we know so far
- Here's what we're doing about it
- Here's how to escalate if you notice anything related
What NOT to share (yet):
- Speculation about attackers
- Unconfirmed scope or impact
- Blame for individuals
### Customer Communication
This is where most startups mess up. The instinct is to either:
- Say nothing and hope it goes away
- Over-share in a panic
Both are wrong.
When to notify customers:
- If their data was accessed or exposed
- If they need to take action (change passwords, etc.)
- If the service is degraded due to containment measures
Template for initial customer communication:
```
Subject: Security Notice - Action Required

Hi [Customer],

We detected suspicious activity on your account on [date].
As a precaution, we've [action taken - e.g., reset your password].

What you should do:
1. [Specific action they need to take]
2. [Another action if applicable]

What we're doing:
- Investigating the scope of the activity
- Implementing additional security measures
- We'll update you within [timeframe]

If you have questions, contact us at [security email].

[Your name]
```
What NOT to do:
- Don't minimize ("a minor incident")
- Don't speculate about attackers
- Don't promise things you can't deliver
- Don't use legal jargon that confuses people
### Legal and Regulatory
Depending on your situation, you may have legal obligations:
GDPR (if you have EU users):
- 72-hour notification requirement to supervisory authority
- User notification if high risk to rights and freedoms
US State Laws (California, etc.):
- Breach notification requirements vary by state
- Generally 30-60 days, but check your specific requirements
Industry Specific:
- HIPAA (healthcare): 60 days for breach notification
- PCI-DSS (payment cards): Immediate notification to card brands
Our advice: If the incident involves personal data, loop in legal counsel within the first 8 hours. Don't wait.
## Hour 8-16: Remediate

### Fix the Root Cause
Containment stops the bleeding. Remediation fixes the wound.
Common root causes and fixes:
| Root Cause | Fix |
|---|---|
| Weak password | Enforce password requirements, implement MFA |
| Exposed API key | Rotate key, add secrets scanning to CI/CD |
| SQL injection | Parameterized queries, input validation |
| Session hijacking | Implement proper session management, add fingerprinting |
| Phishing | Security awareness training, implement DMARC |
| Excessive permissions | Implement least privilege, regular access reviews |
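The "add secrets scanning to CI/CD" fix from the table can start as small as a check on staged diffs for key-shaped strings before they're committed. A sketch (the `ak_live_` pattern is a made-up example; dedicated scanners like gitleaks or truffleHog do this far more thoroughly):

```shell
# Two staged diff lines: one leaks a key, one is harmless (made-up key prefix)
staged_diff='+API_KEY=ak_live_1234
+update readme'

# Added lines in a diff start with '+'; flag any that contain a key-shaped string.
# In a real pre-commit hook you'd feed in: git diff --cached
if echo "$staged_diff" | grep -qE '^\+.*ak_live_[A-Za-z0-9]+'; then
  echo "Possible secret staged - block the commit"
fi
# -> Possible secret staged - block the commit
```

Wire the same pattern into CI as a required check so a forgotten local hook doesn't become the next incident.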
### Implement Additional Controls
What would have detected this faster? Implement it now:
- Add security event logging if you don't have it
- Enable alerting for the attack pattern you just experienced
- Add monitoring for indicators of compromise
- Review and tighten access controls
### Validate the Fix
Before declaring victory:
- Test that the vulnerability is actually fixed
- Verify containment measures can be safely removed
- Check that normal operations are restored
- Monitor for signs of attacker persistence
## Hour 16-24: Document and Learn

### Create an Incident Report
Your incident report should include:

1. **Executive Summary**
   - What happened (one paragraph)
   - Impact (users affected, data exposed)
   - Current status
2. **Timeline**
   - When the incident started
   - When it was detected
   - Key actions taken and when
   - When it was resolved
3. **Root Cause Analysis**
   - How did the attacker get in?
   - Why wasn't it detected sooner?
   - What made the attack possible?
4. **Impact Assessment**
   - Data accessed/exposed
   - Users affected
   - Business impact
   - Regulatory implications
5. **Remediation Actions**
   - Immediate fixes
   - Long-term improvements
   - Timeline for implementation
6. **Lessons Learned**
   - What went well
   - What could improve
   - Action items with owners and deadlines
### The Blameless Post-Mortem
Within 48-72 hours, hold a post-mortem meeting. The goal isn't to assign blame—it's to improve.
Questions to discuss:
- How did we detect the incident?
- Could we have detected it faster?
- Did our response process work?
- What tools or information were missing?
- How do we prevent this specific issue?
- How do we prevent this class of issues?
Output: Concrete action items with owners and deadlines.
## The Checklist
Print this out. Put it somewhere accessible.
### Immediate (Hour 0-1)
- [ ] Confirm the incident is real
- [ ] Assemble incident team
- [ ] Assign roles (Lead, Technical, Comms, Scribe)
- [ ] Initial containment actions
- [ ] Start the incident log
### Investigation (Hour 1-4)
- [ ] Preserve evidence (logs, snapshots)
- [ ] Build incident timeline
- [ ] Determine blast radius
- [ ] Identify root cause
### Communication (Hour 4-8)
- [ ] Notify internal team
- [ ] Assess customer notification needs
- [ ] Draft customer communication
- [ ] Consult legal if personal data involved
### Remediation (Hour 8-16)
- [ ] Fix root cause
- [ ] Implement additional controls
- [ ] Validate fixes
- [ ] Safely remove containment measures
### Documentation (Hour 16-24)
- [ ] Complete incident report
- [ ] Schedule post-mortem
- [ ] Update runbooks and procedures
- [ ] Assign follow-up action items
## Prevention Is Cheaper Than Response
The best incident is the one that never happens. Here's what to do before you're in crisis mode:
- Implement security event logging — You can't investigate what you didn't record
- Set up alerting — Detect anomalies before customers report them
- Enable MFA everywhere — It stops the vast majority of account-takeover attempts
- Practice incident response — Run a tabletop exercise quarterly
- Have a communication plan ready — Draft templates before you need them
Nobody wants to deal with a security incident at 2 AM. But with the right preparation and a clear playbook, you can contain damage, communicate effectively, and come out stronger.
Need help with security event logging and anomaly detection? LiteSOC gives you the visibility to detect incidents fast and the data to investigate them thoroughly.