It’s 2:47 AM. Your monitoring system fires an alert: bandwidth utilization has jumped from 2 Gbps to 85 Gbps in thirty seconds. Your website is unreachable. Customers are calling. Your CEO’s phone is ringing.
Who do you call? What’s the first action? Who has authority to make filtering decisions? Where’s the vendor’s emergency contact number? What escalation path applies at 3 AM on a Saturday?
If the answer to any of these questions is “I don’t know” or “we’ll figure it out,” you don’t have a DDoS response plan. You have a hope strategy. And hope is not a mitigation technique.
Why You Need a Written Plan
Every organization believes it will respond rationally under pressure. The reality is different. During an active DDoS attack:
- Adrenaline impairs judgment. Engineers make reactive decisions instead of following procedures
- Communication breaks down. Multiple teams work in parallel without coordination
- Institutional knowledge is unavailable. The one person who knows the firewall vendor’s escalation process is on vacation
- Time pressure leads to mistakes. A misconfigured rate limit blocks legitimate traffic, turning a 30-minute attack into a 4-hour outage
A written, rehearsed response plan eliminates these failure modes. When the attack hits, every team member opens the same document and follows the same procedures. Decisions are pre-made. Contacts are pre-listed. Escalation criteria are pre-defined.
The Six Components of a DDoS Response Plan
1. Contact Directory
Before anything else, document every person and organization involved in DDoS response:
Internal contacts:
- Network operations team (24/7 phone numbers, not just email)
- Security team lead
- Infrastructure manager
- Executive escalation path (CTO → CEO → Board)
- Communications team (for external messaging)
External contacts:
- DDoS mitigation provider — emergency hotline, portal URL, account ID
- ISP/upstream provider — NOC phone number, account manager
- DNS provider — support contact
- CDN provider — emergency support
- Domain registrar — in case DNS needs urgent changes
For each contact, document (see the sketch after this list):
- Name
- Role
- Phone number (primary + backup)
- Availability hours
- Authority level (who can authorize blackholing, who can authorize failover)
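The directory is easiest to keep current when it lives in a machine-readable file next to the plan, where the monthly verification in section 6 can audit it. A minimal sketch in Python; the field names and authority values are illustrative assumptions, not a prescribed schema:

```python
# contacts.py: a machine-readable contact directory for the response plan.
# Field names and authority levels are illustrative; adapt them to your org.
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    role: str
    phone_primary: str
    phone_backup: str = ""
    availability: str = "24/7"
    # Actions this person may authorize, e.g. {"blackholing", "failover"}.
    authority: set = field(default_factory=set)

CONTACTS = [
    Contact("NOC on-call", "Network operations", "+1-555-0100",
            phone_backup="+1-555-0101", authority={"rate_limits"}),
    Contact("Security lead", "Security team lead", "+1-555-0102",
            authority={"blackholing", "failover"}),
    Contact("Mitigation provider hotline", "External emergency support",
            "+1-555-0199"),
]
```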
2. Detection and Classification
Define how attacks are detected and how severity is classified. This section must answer:
How do we know we’re under attack?
- Automated monitoring alerts (bandwidth, PPS, and error-rate thresholds; a minimal check is sketched after this list)
- CoreDetection™ notifications (webhook, email, portal alert)
- A pattern of customer complaints
- CDN/WAF alert notifications
- Manual observation of traffic dashboards
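Most of these signals can feed a single static-threshold check. A minimal sketch, assuming your monitoring stack can return current metrics as a dictionary; the threshold values are illustrative, not recommendations:

```python
# Illustrative alert thresholds; tune them against your own baseline.
THRESHOLDS = {
    "bandwidth_gbps": 10.0,    # inbound bandwidth ceiling
    "packets_per_sec": 2e6,    # aggregate PPS ceiling
    "error_rate": 0.05,        # fraction of 5xx responses
}

def check_thresholds(metrics: dict) -> list[str]:
    """Return the names of all metrics currently above their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

# The opening scenario would trip every threshold at once:
print(check_thresholds({"bandwidth_gbps": 85.0,
                        "packets_per_sec": 9e6,
                        "error_rate": 0.40}))
# -> ['bandwidth_gbps', 'packets_per_sec', 'error_rate']
```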
How do we classify severity?
| Severity | Description | Response Time | Example |
|---|---|---|---|
| P1 — Critical | Service completely offline, revenue impact | Immediate (< 5 min) | 100+ Gbps flood, all services down |
| P2 — Major | Significant degradation, some services affected | Within 15 min | Partial service disruption, elevated latency |
| P3 — Minor | Detectable attack, no user impact | Within 1 hour | Small flood absorbed by existing rules |
| P4 — Informational | Unusual traffic pattern, no impact | Next business day | Probing activity, reconnaissance |
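The table maps directly onto a small triage helper that removes guesswork for the on-call engineer. A sketch; the three input flags are assumptions about what your monitoring can report:

```python
def classify_severity(service_down: bool, degraded: bool,
                      attack_confirmed: bool) -> str:
    """Map observed impact onto the P1-P4 scale defined in the table."""
    if service_down:
        return "P1"  # complete outage: respond immediately
    if degraded:
        return "P2"  # partial disruption: respond within 15 minutes
    if attack_confirmed:
        return "P3"  # attack absorbed, no user impact: within 1 hour
    return "P4"      # unusual pattern only: next business day

print(classify_severity(service_down=True, degraded=True,
                        attack_confirmed=True))  # -> P1
```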
What constitutes “normal” traffic? Document your baseline metrics:
- Normal bandwidth (in/out) by time of day
- Normal PPS by protocol
- Normal connection rates
- Normal error rates
- Seasonal variations (sales events, launches)
Without a documented baseline, you can't distinguish an attack from a legitimate traffic spike.
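With a baseline on record, "is this abnormal?" becomes arithmetic rather than a judgment call. A minimal sketch, assuming you keep per-hour bandwidth baselines; the 3x multiplier is an illustrative choice, not a standard:

```python
from datetime import datetime, timezone

# Illustrative per-hour inbound baseline (Gbps); derive yours from history.
HOURLY_BASELINE_GBPS = {hour: 2.0 if 8 <= hour < 20 else 1.0
                        for hour in range(24)}

def is_anomalous(current_gbps: float, when: datetime,
                 multiplier: float = 3.0) -> bool:
    """Flag traffic more than `multiplier` times the hour's baseline."""
    return current_gbps > multiplier * HOURLY_BASELINE_GBPS[when.hour]

# 85 Gbps at 02:47 against a ~1 Gbps night baseline is unambiguous:
print(is_anomalous(85.0, datetime(2025, 6, 7, 2, 47, tzinfo=timezone.utc)))
# -> True
```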
3. Immediate Response Procedures
Step-by-step actions for the first 30 minutes. This is the most critical section.
Step 1: Confirm the attack (0-2 minutes)
- Check monitoring dashboards for bandwidth/PPS anomaly
- Check CoreTech Client Portal for attack classification
- Verify the issue isn’t caused by an internal change (deployment, configuration update)
- Classify severity (P1-P4)
Step 2: Notify the response team (2-5 minutes)
- Alert the designated on-call engineer
- For P1/P2: activate the response team (primary + backup)
- For P1: notify executive escalation chain
- Log the incident start time and initial classification (a logging helper is sketched below)
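Logging takes seconds if a helper exists before the incident does. A minimal sketch that appends timestamped JSON lines to a local file; the path and field names are illustrative:

```python
import json
from datetime import datetime, timezone

def log_incident_event(severity: str, note: str,
                       path: str = "incident_log.jsonl") -> None:
    """Append one timestamped event to an append-only JSON-lines log."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "severity": severity,
        "note": note,
    }
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

# First entry, the moment the attack is confirmed:
log_incident_event("P1", "Inbound traffic at 85 Gbps; all services unreachable")
```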
Step 3: Verify mitigation is engaged (5-10 minutes)
- Check CoreTech Client Portal — is CoreDetection™ mitigating?
- Verify pre-configured firewall rules are active
- Check if attack traffic is being filtered (clean traffic graphs should show normal levels)
- If automated mitigation is insufficient, proceed to manual intervention
Step 4: Manual intervention if needed (10-20 minutes)
- Contact CoreTech emergency support if automated rules aren’t effective
- Add temporary firewall rules (GeoIP blocks, rate limits, protocol filters; see the sketch after this list)
- Consider activating emergency rule templates/bundles
- If the attack exceeds mitigation capacity, coordinate with your upstream ISP
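If you operate your own Linux edge, temporary rules can be scripted in advance so they are one command away at 3 AM. A sketch using nftables via subprocess (requires root); the table name, port, and limit are illustrative, and on a CoreTech deployment the equivalent filters would normally be set through the Client Portal instead:

```python
import subprocess

def nft(*args: str) -> None:
    """Run one nftables command; raises CalledProcessError on failure."""
    subprocess.run(["nft", *args], check=True)

# One-time setup: a dedicated table and chain for emergency rules.
nft("add", "table", "inet", "ddos_emergency")
nft("add", "chain", "inet", "ddos_emergency", "input",
    "{ type filter hook input priority -10 ; policy accept ; }")

# Temporary rate limit: drop UDP/53 traffic above 5,000 packets/second.
nft("add", "rule", "inet", "ddos_emergency", "input",
    "udp", "dport", "53", "limit", "rate", "over", "5000/second", "drop")

# When the attack ends, remove everything in one step:
#   nft delete table inet ddos_emergency
```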
Step 5: Stabilize and monitor (20-30 minutes)
- Confirm services are recovering
- Monitor for attack vector changes (attackers often adapt when initial vector is mitigated)
- Adjust filtering rules based on observed attack patterns
- Begin documenting the incident
4. Communication Templates
Pre-written messages eliminate the need to compose under pressure:
Internal status update (every 30 minutes during P1/P2):
    Subject: [ACTIVE] DDoS Incident — Update #[N]
    Status: [Mitigating / Escalated / Resolved]
    Started: [timestamp]
    Current impact: [description]
    Actions taken: [list]
    Next steps: [plan]
    ETA to resolution: [estimate]
Customer notification (if services are impacted):
We are currently experiencing elevated traffic levels that are affecting service availability. Our security team is actively mitigating the situation. We expect services to normalize within [timeframe]. Updates will be posted at [status page URL].
Post-incident customer communication:
The service disruption on [date] was caused by a Distributed Denial-of-Service (DDoS) attack targeting our infrastructure. The attack was detected at [time] and fully mitigated by [time]. Total customer-facing impact duration was [duration]. No customer data was compromised.
5. Post-Incident Analysis
Every attack is a learning opportunity. Complete the following within 48 hours of resolution:
Incident report template:
- Attack start time, detection time, mitigation time, resolution time
- Attack vectors (type, volume, duration, source distribution)
- Effectiveness of automated mitigation (what worked, what didn’t)
- Manual actions taken and their effectiveness
- Customer impact (duration, affected services, SLA implications)
- Timeline of communications (internal and external)
- Root cause analysis (if applicable)
- Action items (what to improve before the next attack)
Key metrics to measure (computed in the sketch after this list):
- Time to Detect (TTD): How long between attack start and first alert?
- Time to Mitigate (TTM): How long between detection and effective mitigation?
- Customer Impact Duration: How long in total were services degraded or offline?
- False Positive Rate: Did mitigation rules block legitimate traffic?
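All four metrics fall out of a handful of timestamps, which is one more reason to log them during the incident (see the helper in step 2). A sketch with illustrative values:

```python
from datetime import datetime

def incident_metrics(attack_start: datetime, detected: datetime,
                     mitigated: datetime, resolved: datetime) -> dict:
    """Derive the key response metrics from four incident timestamps."""
    return {
        "time_to_detect": detected - attack_start,
        "time_to_mitigate": mitigated - detected,
        # Simplification: assumes impact lasted from attack start to resolution.
        "customer_impact_duration": resolved - attack_start,
    }

m = incident_metrics(attack_start=datetime(2025, 6, 7, 2, 46, 30),
                     detected=datetime(2025, 6, 7, 2, 47, 0),
                     mitigated=datetime(2025, 6, 7, 2, 55, 0),
                     resolved=datetime(2025, 6, 7, 3, 10, 0))
print(m["time_to_detect"])  # -> 0:00:30
```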
Common post-incident actions:
- Update firewall rules based on attack patterns
- Adjust CoreDetection™ sensitivity if detection was slow
- Add new rate limiting rules for exploited protocols
- Update contact directory if any contacts were unreachable
- Schedule a response plan rehearsal
6. Testing and Rehearsal
A response plan that's never been tested is a document, not a plan. Build testing into your calendar:
Tabletop exercise (quarterly): Walk the response team through a simulated attack scenario. Present the scenario, follow the plan step by step, identify gaps.
Contact verification (monthly): Confirm all phone numbers, email addresses, and portal credentials are current. One invalid phone number at 3 AM can add 30 minutes to response time.
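Part of that monthly check can be automated: a script can confirm every entry is complete and well-formed before a human dials the numbers. A sketch against the contact structure from section 1; the validation rules are illustrative:

```python
import re

# Reuses the Contact records sketched in section 1.
PHONE_RE = re.compile(r"^\+\d{1,3}-[\d-]{7,}$")  # illustrative format check

def audit_contacts(contacts) -> list[str]:
    """Return human-readable problems found in the contact directory."""
    problems = []
    for c in contacts:
        if not PHONE_RE.match(c.phone_primary):
            problems.append(f"{c.name}: bad primary phone {c.phone_primary!r}")
        if not c.phone_backup and c.availability != "24/7":
            problems.append(f"{c.name}: no backup number and not 24/7")
    return problems

# An empty result means the directory is structurally sound; actually
# dialing the numbers still needs a human.
```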
Rule testing (after every major change): When you update firewall rules, verify they work as expected. Use CoreTech’s Client Portal to review rule configurations and validate against current traffic patterns.
Building Your Plan: The CoreTech Advantage
CoreTech’s architecture simplifies several components of your response plan:
Detection is automated. CoreDetection™ monitors traffic behaviorally and classifies attacks within seconds. You don’t need to build complex monitoring thresholds — the AI behavioral engine handles detection.
Mitigation is pre-configured. Firewall rules set through the Client Portal engage instantly when an attack begins. Your “Step 3” becomes “verify mitigation is working” instead of “figure out what to filter.”
Visibility is real-time. During an attack, the Client Portal shows bandwidth, PPS, attack vectors, source countries, severity classification — everything your response team needs to make informed decisions.
Communication is proactive. Webhook integrations notify your team via Slack, PagerDuty, or email the moment CoreDetection™ detects an anomaly. Your team is alerted before customers notice.
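A thin receiver is enough to relay those notifications into chat. The sketch below assumes a JSON payload with severity, attack_type, and gbps fields; that shape is an illustration, not CoreDetection™'s documented schema, so check the portal documentation for the real format:

```python
# Minimal webhook receiver that forwards DDoS alerts to a Slack channel.
import json
import urllib.request

from flask import Flask, request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

app = Flask(__name__)

@app.route("/ddos-alert", methods=["POST"])
def ddos_alert():
    event = request.get_json(force=True)
    text = (f":rotating_light: DDoS alert: severity {event.get('severity')}, "
            f"type {event.get('attack_type')}, {event.get('gbps')} Gbps")
    body = json.dumps({"text": text}).encode()
    req = urllib.request.Request(SLACK_WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```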
Post-incident data is comprehensive. Historical attack data — volume, vectors, duration, source distribution — is available through the portal for post-incident analysis without external forensic tools.
The Minimum Viable Plan
If building a full plan feels overwhelming, start with these four essentials:
- Contact list — who to call, in what order, at any hour
- Severity classification — P1/P2/P3/P4 with clear definitions
- First 10 minutes checklist — the exact steps your on-call engineer follows
- Pre-configured mitigation — CoreEdge™ firewall rules deployed before the first attack
These four elements cover most of what matters in DDoS response. The communication templates, post-incident analysis, and rehearsal schedule are important, but they're useless without these basics in place.
10-day free trial — set up your pre-configured mitigation, establish your baseline, and test your response plan with real CoreEdge™ and CoreDetection™ capabilities.
Want to see this in action?
Get a live demonstration of CoreTech's DDoS mitigation platform.