Contact Center SLAs — What to Include, How to Set Targets, and What Happens When They Are Missed

A Service Level Agreement (SLA) in a contact center defines the performance standards that the operation must meet — and the consequences when it does not. In a single-client call center, the SLA is typically an internal commitment between operations and leadership. In a BPO, the SLA is a contractual obligation between the provider and the client, with financial penalties attached to misses.
The SLA is the link between operational metrics and commercial outcomes. An operation can track service level, AHT, quality scores, and every other metric — but without an SLA, those metrics are internal measurements with no external accountability. The SLA defines which metrics matter, what the targets are, how they are measured, and what happens when they are missed.
What belongs in a contact center SLA
An SLA has six components. Missing any one of them creates ambiguity that surfaces as a dispute later.
| Component | What it defines | Why it matters |
|---|---|---|
| Metric definitions | Exactly how each metric is calculated — the formula, the data source, and what is included/excluded | Without precise definitions, provider and client calculate the same metric differently and disagree about whether the target was met |
| Performance targets | The specific threshold for each metric (e.g., 80% of calls answered within 20 seconds) | Targets without definitions are unenforceable. Definitions without targets are academic |
| Measurement period | Whether targets are measured daily, weekly, or monthly — and how intervals are aggregated | A monthly service level target can be met while individual days miss badly. The measurement period determines accountability granularity |
| Reporting requirements | What reports the provider delivers, in what format, how frequently, and by what deadline | If the client cannot verify performance independently, the SLA is unenforceable |
| Penalty/credit mechanism | The financial consequence of missing a target — typically a credit against the monthly invoice | Without penalties, the SLA is aspirational rather than binding |
| Review and revision process | How often the SLA is reviewed, what triggers a revision, and who must agree to changes | Business needs change. An SLA that cannot be revised becomes either irrelevant or punitive |
SLA metrics and typical targets
Not every operational metric belongs in the SLA. SLA metrics should be outcomes that the client cares about, that the provider can control, and that can be measured objectively.
Service delivery metrics
These are the metrics that appear in nearly every contact center SLA.
| Metric | Definition | Typical SLA target | Measurement notes |
|---|---|---|---|
| Service level | % of calls answered within X seconds | 80/20 (80% within 20 sec) or 80/30 | Most common SLA metric. Measured at the interval, daily, or monthly level depending on the contract |
| Abandonment rate | % of calls where the caller hangs up before reaching an agent | Less than 5% | Exclude calls abandoned within 5 seconds (short abandons) — these are misdials, not service failures |
| Average speed of answer (ASA) | Average time callers wait before being connected to an agent | Less than 30 seconds | Correlated with service level but measures the average, not the distribution |
Quality metrics
| Metric | Definition | Typical SLA target | Measurement notes |
|---|---|---|---|
| QA score | Average score on evaluated calls against a standardized rubric | 85%+ | Requires agreement on the rubric, sample size (typically 4–6 evaluations per agent per month), and calibration process |
| First call resolution (FCR) | % of calls resolved without the customer needing to call back | 70–75% | Define the lookback window — if the customer calls back within 7 days on the same issue, the original call is not FCR |
| Customer satisfaction (CSAT) | Score from post-call surveys | 4.0+ out of 5.0 or 80%+ top-box | Response rates are typically 5–15%. Low response rates make CSAT unreliable — set a minimum response rate requirement |
Efficiency metrics
| Metric | Definition | Typical SLA target | Measurement notes |
|---|---|---|---|
| AHT | Average handle time (talk + hold + ACW) | Varies by call type — typically 4–8 minutes | Always set AHT targets by call type, not as a single number. Pair with FCR to prevent agents from rushing calls |
| Adherence | % of time agents follow their assigned schedule | 90%+ | Typically an internal operational metric rather than a client SLA metric, but some BPO contracts include it |
| Occupancy | % of logged-in time agents spend handling calls or in after-call work | 75–85% | Not a target to maximize — occupancy above 85% causes agent burnout and increases attrition |
What not to put in the SLA
| Metric | Why it does not belong in the SLA |
|---|---|
| Calls per hour | An internal productivity metric — the client cares about whether calls are answered and resolved, not how many each agent handles |
| Attrition rate | Important operationally but not an outcome the client is buying. If attrition causes SLA misses, the SLA catches it through service level and quality metrics |
| Shrinkage | An internal workforce planning metric that the provider manages — not a client-facing commitment |
| Agent utilization | Similar to occupancy but even more internally focused. The client cares about outcomes, not how the provider allocates its workforce |
How to structure penalty and credit mechanisms
The penalty mechanism determines whether the SLA has teeth. Without financial consequences, a missed SLA is a data point in a report rather than a driver of corrective action.
Common penalty structures
| Structure | How it works | Pros | Cons |
|---|---|---|---|
| At-risk percentage | A fixed percentage of the monthly invoice (typically 5–15%) is "at risk." Misses reduce the payment by the at-risk amount | Simple, predictable for both parties | Does not scale with severity — a 1-point miss and a 10-point miss may trigger the same penalty |
| Tiered credits | Different penalty amounts for different levels of miss severity | Proportional to impact — a major miss costs more than a minor one | More complex to calculate and administer |
| Per-metric credits | Each SLA metric has its own penalty pool, so missing multiple metrics compounds | Prevents the provider from sacrificing one metric to protect another | Can result in disproportionate penalties when a single root cause (e.g., understaffing) causes multiple metric misses simultaneously |
Tiered credit example
For a BPO contract with a monthly invoice of $200,000 and 10% at risk ($20,000):
| Service level achieved | Credit | Monthly financial impact |
|---|---|---|
| 80%+ (target met) | None | $0 |
| 75–79% (minor miss) | 25% of at-risk | −$5,000 |
| 70–74% (moderate miss) | 50% of at-risk | −$10,000 |
| 65–69% (major miss) | 75% of at-risk | −$15,000 |
| Below 65% (critical miss) | 100% of at-risk | −$20,000 |
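The schedule above reduces to a simple lookup. A minimal sketch, assuming the same tiers and the $200,000 / 10%-at-risk example (real contracts define their own bands):

```python
def tiered_credit(service_level: float, invoice: float,
                  at_risk_pct: float = 0.10) -> float:
    """Credit owed to the client under the tiered schedule above."""
    at_risk = invoice * at_risk_pct
    if service_level >= 80:
        share = 0.00   # target met
    elif service_level >= 75:
        share = 0.25   # minor miss
    elif service_level >= 70:
        share = 0.50   # moderate miss
    elif service_level >= 65:
        share = 0.75   # major miss
    else:
        share = 1.00   # critical miss
    return at_risk * share

# A 72% month on a $200,000 invoice → moderate miss
print(tiered_credit(72, 200_000))  # 10000.0
```

Encoding the tiers this way also makes the monthly credit calculation auditable by both parties from the same agreed table.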
Penalty mechanism principles
Make penalties proportional to impact. A service level of 79% for one day is operationally different from 79% for an entire month. The measurement period and penalty structure should reflect this.
Cap penalties at the at-risk amount. Uncapped penalties create adversarial relationships and incentivize the provider to hide problems rather than report them.
Include a cure period. If the provider misses an SLA, give them a defined period (typically 30 days) to implement corrective action before penalties apply — but only for the first occurrence. Repeated misses should trigger penalties immediately.
Define exclusions. Specify events that suspend SLA measurement: client-caused issues (client system outage, client-requested process change during ramp), force majeure, and volume spikes beyond an agreed threshold (e.g., 40%+ above forecast). Without exclusions, the provider is penalized for things outside their control.
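The volume-spike exclusion is just a threshold check against forecast. A sketch, assuming the 40%-over-forecast threshold from the example:

```python
def volume_exclusion_applies(actual_volume: int, forecast_volume: int,
                             threshold: float = 0.40) -> bool:
    """True when actual volume exceeds forecast by more than the agreed
    threshold, suspending SLA measurement for the period."""
    return actual_volume > forecast_volume * (1 + threshold)

# 15,000 actual vs. 10,000 forecast is 50% over — exclusion applies
print(volume_exclusion_applies(15_000, 10_000))  # True
```

The contract should also specify the period over which the comparison runs (daily, weekly, or monthly), since a one-day spike and a month-long surge are different events.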
SLA reporting requirements
An SLA is only enforceable if the client can verify performance. The reporting section defines what the provider delivers and when.
| Report element | Specification |
|---|---|
| Frequency | Daily summary + weekly detail + monthly formal report |
| Delivery deadline | Daily by 10 AM next business day, weekly by Monday noon, monthly by the 5th of the following month |
| Format | Agreed template — typically a spreadsheet or dashboard with raw data accessible |
| Metrics included | All SLA metrics + supporting metrics (volume, AHT by call type, staffing levels) |
| Interval detail | Monthly report includes interval-level (30-minute) data for service level — not just the daily or monthly average |
| Narrative | Monthly report includes a narrative explaining any misses, root cause analysis, and corrective actions taken or planned |
| Data access | Client has direct access to ACD data or a real-time dashboard — not dependent solely on provider reports |
Why interval-level data matters: A monthly service level of 80% can mean consistent 80% every day — or it can mean 90% on low-volume days and 65% during peak hours. Interval-level data reveals whether the operation is scheduling correctly or whether the monthly average is masking daily failures.
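The masking effect is a volume-weighting artifact. A sketch with hypothetical interval data showing how an 80% monthly figure can coexist with badly missed peaks:

```python
# Hypothetical intervals: (calls offered, service level %)
intervals = [
    (300, 90.0),  # quiet interval
    (300, 90.0),  # quiet interval
    (200, 65.0),  # peak interval — badly missed
    (200, 65.0),  # peak interval — badly missed
]

offered = sum(volume for volume, _ in intervals)
answered_in_target = sum(volume * sl / 100 for volume, sl in intervals)
monthly_sl = 100 * answered_in_target / offered

print(monthly_sl)  # 80.0 — target "met" while 40% of calls hit a 65% interval
```

Only the interval-level rows reveal that callers during peaks are consistently underserved, which is exactly why the reporting spec above requires 30-minute detail.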
Common SLA mistakes
Setting targets without defining measurement
The mistake: The SLA states "80/20 service level" but does not define whether abandoned calls are included in the denominator, whether short abandons are excluded, whether the target applies per interval, per day, or per month, or which ACD report produces the number.
The consequence: Provider and client look at different reports and get different numbers. The provider claims 81%. The client calculates 77%. Both are correct based on their definitions.
The fix: Define the exact formula, data source, exclusions, and measurement period for every metric. Example: "Service level = (calls answered within 20 seconds) / (calls answered + calls abandoned after 5 seconds), measured monthly, calculated from ACD report [specific report name]."
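That definition can be written down as executable arithmetic, which removes any ambiguity about the denominator. A sketch of the formula quoted above, with short abandons (within 5 seconds) already excluded from the inputs:

```python
def service_level(answered_within_target: int,
                  answered_total: int,
                  abandoned_after_5s: int) -> float:
    """Service level per the definition above:
    (calls answered within target) /
    (calls answered + calls abandoned after 5 seconds).
    Short abandons are excluded from the denominator entirely."""
    denominator = answered_total + abandoned_after_5s
    return 100 * answered_within_target / denominator

# 810 answered within 20 s, 950 answered in total, 50 abandoned after 5 s
print(service_level(810, 950, 50))  # 81.0
```

Note how the choice to count abandons in the denominator moves the number: with the 50 abandons excluded entirely, the same month would report 810/950 ≈ 85.3%. This is precisely the provider-says-81/client-says-77 dispute the definition prevents.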
Using a single AHT target across all call types
The mistake: The SLA sets an AHT target of 360 seconds for the entire operation.
The consequence: A billing inquiry (180 seconds) and a technical troubleshooting call (600 seconds) have the same target. Agents rush complex calls to meet the target, reducing FCR. Or the blended average meets the target while individual call types are wildly off.
The fix: Set AHT targets by call type. Include a call-type mix assumption in the SLA — if the mix changes significantly (e.g., complex calls increase from 20% to 40% of volume), the blended AHT target should be renegotiated.
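The effect of a mix shift on the blended target is simple weighted arithmetic. A sketch with hypothetical call types and per-type AHT targets:

```python
# Hypothetical mix: call type -> (share of volume, AHT target in seconds)
baseline_mix = {
    "billing": (0.60, 180),
    "technical": (0.40, 600),
}

def blended_aht(mix: dict) -> float:
    """Volume-weighted blended AHT target for a given call-type mix."""
    return sum(share * aht for share, aht in mix.values())

print(blended_aht(baseline_mix))  # 348.0

# Complex calls grow from 40% to 60% of volume — per-type targets unchanged
shifted_mix = {
    "billing": (0.40, 180),
    "technical": (0.60, 600),
}
print(blended_aht(shifted_mix))  # 432.0
```

The blended target moves from 348 to 432 seconds with no change in agent performance, which is why the SLA must pin down the mix assumption and trigger renegotiation when it shifts.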
Penalizing without diagnosing
The mistake: The SLA has penalties for missing service level. The provider misses service level for 3 months. The client applies penalties each month but does not investigate why.
The consequence: The root cause might be that the client's volume forecast was 20% low, and the provider does not have enough approved headcount to cover the actual volume. Penalties do not fix the staffing gap.
The fix: Pair penalties with a structured review process. When an SLA is missed, require a root cause analysis and corrective action plan before the next measurement period. If the root cause is client-originated (inaccurate volume forecast, system issues, scope change), the miss should be classified differently.
Measuring monthly when problems are daily
The mistake: Service level is measured and reported monthly. The monthly target is met at 81%.
The consequence: Mondays and the first week after billing cycles run at 65% service level. The rest of the month runs at 88%. The monthly average masks a systematic pattern. Customers calling on Mondays have a consistently poor experience that the SLA does not capture.
The fix: Report at the daily level and set both a monthly target (80%) and a daily floor (no day below 70%). The daily floor ensures that the provider cannot sacrifice specific days to make the monthly number.
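The two-condition check can be sketched directly. A minimal version, assuming the 80% monthly target and 70% daily floor from the fix (a simple mean is used here; a real contract would weight days by call volume):

```python
def sla_met(daily_sl: list[float], monthly_target: float = 80.0,
            daily_floor: float = 70.0) -> bool:
    """Both conditions must hold: the monthly average meets the target
    AND no single day falls below the floor."""
    monthly = sum(daily_sl) / len(daily_sl)
    return monthly >= monthly_target and min(daily_sl) >= daily_floor

# 25 strong days at 88% cannot buy back 5 Mondays at 65%
month = [88.0] * 25 + [65.0] * 5
print(sla_met(month))  # False — monthly average ~84%, but floor breached
```

Without the floor, this month passes at roughly 84% and the Monday failures never surface in the SLA scorecard.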
Omitting volume assumptions
The mistake: The SLA defines service level targets but does not specify the volume range for which those targets apply.
The consequence: The client's volume grows 30% over 6 months. The provider's staffing was sized for the original volume. Service level degrades. The client applies penalties. The provider argues that the volume increase was not part of the agreement.
The fix: Include a volume band in the SLA: "Targets apply for monthly volume between X and Y contacts. For volume exceeding Y by more than 15%, the parties will negotiate adjusted staffing and revised targets within 30 days."
How SLAs connect to operations
The SLA defines what the operation must achieve. The operational management process determines how to achieve it. Every SLA metric maps to specific operational functions:
| SLA metric | Operational function that drives it | What to fix when the metric misses |
|---|---|---|
| Service level | Workforce planning + scheduling + intraday management | Check forecast accuracy, shrinkage assumptions, schedule coverage by interval, intraday response time |
| Abandonment rate | Same as service level — abandonment is a consequence of insufficient staffing or long wait times | Same diagnostic as service level. If abandonment is high but service level is near target, check IVR design and queue messaging |
| QA score | Quality management — rubric design, evaluation process, coaching | Check inter-rater reliability (evaluators scoring consistently), coaching cadence, whether training gaps are being addressed |
| FCR | Agent training + knowledge management + system access | Check whether agents have the tools and authority to resolve issues on the first call without transfers or callbacks |
| AHT | Agent proficiency + system efficiency + call complexity | Decompose into talk, hold, and ACW and diagnose which component is elevated |
| CSAT | All of the above — CSAT is a downstream indicator of the entire operation | Do not try to fix CSAT directly. Identify which upstream metric is affecting the customer experience |
SLAs for BPO contracts
BPO operations have additional SLA complexity because the agreement is between two separate organizations with different economic incentives.
What BPO SLAs must address beyond standard metrics
| Element | What it defines | Why it is different from internal SLAs |
|---|---|---|
| Staffing model | Whether the client pays per agent (dedicated), per call (shared), or per hour | Determines who bears the risk of volume variability — per-agent models put volume risk on the client; per-call models put it on the provider |
| Ramp period | The timeline for new agents to reach full proficiency after training, during which SLA targets may be relaxed | Without a defined ramp, the provider is penalized for attrition replacement even when the new agents are performing as expected for their tenure |
| Volume commitments | Minimum and maximum volume the client commits to send | The provider staffs based on the commitment. If the client sends 40% less volume, the provider has idle agents. If the client sends 40% more, the provider cannot staff for it without lead time |
| Scope changes | How new call types, processes, or systems are added — including the training period and SLA adjustment | Scope creep without SLA adjustment is the most common source of BPO contract disputes |
| Billable utilization | What counts as billable time — calls only, or calls + training + coaching | Defines the economic model. If training is non-billable, the provider has a financial incentive to minimize it — which degrades quality |
| Termination triggers | How many consecutive months of SLA misses constitute grounds for contract termination | Protects the client from sustained poor performance and protects the provider from termination over a single bad month |
SLA review cadence for BPOs
| Review | Frequency | Participants | Agenda |
|---|---|---|---|
| Operational review | Weekly | Operations managers (both sides) | SLA performance, volume vs. forecast, staffing, open issues |
| Business review | Monthly | Account managers + operations | Monthly SLA scorecard, penalty/credit calculation, corrective action status, upcoming changes |
| Strategic review | Quarterly | Leadership (both sides) | Contract health, volume trends, scope changes, pricing adjustments, relationship assessment |
| SLA revision | Annually (or triggered by material change) | Legal + operations + leadership | Target adjustments, metric additions/removals, penalty structure changes, volume band updates |
