
Call Center KPIs — What to Track, What Targets to Set, and How They Interact

Vik Chadha · Updated · 12 min read

Every call center tracks KPIs. The problem is rarely that metrics are not being collected — it is that they are being collected without understanding how they interact, what targets are realistic, or how optimizing one metric can damage another. A call center that pushes agents to reduce handle time will see AHT drop and FCR drop with it — because agents are rushing calls instead of resolving them. The dashboard shows improvement in one number while the operation gets worse.

The purpose of KPIs is not to generate scores. It is to provide the information needed to make good staffing, coaching, process, and investment decisions. That requires understanding what each metric actually measures, what a realistic target looks like, and which metrics should take priority when they conflict.

The metrics that matter

Service level

What it measures: The percentage of calls answered within a target time threshold. Typically expressed as "X% of calls answered within Y seconds" — for example, 80/20 means 80% of calls answered within 20 seconds.

How it is calculated: (Calls answered within threshold ÷ total calls offered) × 100
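As a sketch, the calculation over a set of call records might look like the following (the field names are illustrative, not from any particular platform):

```python
# Sketch: service level from a list of call records. Abandoned calls count
# against the metric: they were offered but never answered within the
# threshold. Field names ('answered', 'wait_sec') are illustrative.

def service_level(calls, threshold_sec=20):
    """calls: list of dicts with 'answered' (bool) and 'wait_sec' (seconds).
    Returns the service level as a percentage of calls offered."""
    offered = len(calls)
    if offered == 0:
        return 0.0
    within = sum(1 for c in calls
                 if c["answered"] and c["wait_sec"] <= threshold_sec)
    return within * 100 / offered

calls = [
    {"answered": True, "wait_sec": 8},
    {"answered": True, "wait_sec": 35},   # answered, but past the threshold
    {"answered": False, "wait_sec": 60},  # abandoned
    {"answered": True, "wait_sec": 12},
]
print(service_level(calls, 20))  # 2 of 4 within 20 seconds -> 50.0
```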

Why it matters: Service level is the primary measure of whether you have enough agents on the phones to meet demand. It directly affects customer wait times, abandonment rates, and agent workload.

Call center type               Common target            Aggressive target
General customer service       80/20 (80% in 20 sec)    80/15
Technical support              80/30                    80/20
Sales / revenue-generating     90/15                    90/10
Emergency / critical support   90/10                    95/10

What to watch for: Service level is a staffing metric, not an agent performance metric. If service level is consistently below target, the answer is almost always more agents or better scheduling — not asking existing agents to work faster. Pressuring agents to shorten calls to improve service level trades quality for speed.

How it interacts with other metrics: Service level and occupancy are inversely related. Higher service levels require lower occupancy (more agents available, meaning each agent spends more time waiting between calls). A service level of 90/10 requires significantly more agents than 80/30 — the cost difference can be substantial.

First-call resolution (FCR)

What it measures: The percentage of customer issues resolved during the first contact, without the customer needing to call back.

How it is calculated: There are two common methods:

  • Repeat contact method: Track whether a customer contacts again within 7 days about the same issue. If they do, the first contact was not a resolution. This is more accurate but requires tracking infrastructure.
  • Agent disposition method: The agent marks the call as "resolved" in the system. This is easier but less reliable because agents may mark calls resolved when the customer's actual problem persists.
Call center type              Typical FCR   Strong FCR
General customer service      70–75%        80%+
Technical support             60–70%        75%+
Billing / account inquiries   80–85%        90%+
Complex product support       55–65%        70%+
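A minimal sketch of the repeat-contact method, assuming each contact record carries a customer ID, an issue label, and a timestamp (field names are illustrative):

```python
# Sketch of the repeat-contact method: a contact counts as resolved unless
# the same customer contacts again about the same issue within the window.
# Field names ('customer', 'issue', 'time') are illustrative.
from datetime import datetime, timedelta

def fcr_rate(contacts, window_days=7):
    """contacts: time-sorted list of dicts with 'customer', 'issue', and
    'time' (datetime). Returns FCR as a percentage."""
    if not contacts:
        return 0.0
    window = timedelta(days=window_days)
    resolved = 0
    for i, c in enumerate(contacts):
        repeat = any(
            later["customer"] == c["customer"]
            and later["issue"] == c["issue"]
            and later["time"] - c["time"] <= window
            for later in contacts[i + 1:]
        )
        if not repeat:
            resolved += 1
    return resolved * 100 / len(contacts)

t0 = datetime(2024, 1, 1, 9, 0)
contacts = [
    {"customer": "A", "issue": "billing", "time": t0},
    {"customer": "B", "issue": "login", "time": t0},
    {"customer": "A", "issue": "billing", "time": t0 + timedelta(days=2)},
]
print(round(fcr_rate(contacts), 1))  # A's first contact was not resolved -> 66.7
```

Note the simplification: every contact lands in the denominator, including the repeat itself. A production implementation would group contacts into issue threads and score only the first contact of each thread.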

Why it matters: FCR is the single most important quality metric because it captures what customers care about — getting their problem solved — and directly affects cost. Every call that is not resolved on the first contact generates at least one repeat contact, at least doubling the labor cost for that issue.

How it interacts with other metrics: FCR and AHT have a natural tension. Resolving issues thoroughly on the first call often takes longer than rushing through a call and hoping the problem does not recur. A 5-minute call that resolves the issue costs less than a 3-minute call that generates a callback. Track FCR and AHT together — agents with low AHT and low FCR are rushing; agents with high AHT and high FCR may need help with efficiency but are doing the right thing.

Average handle time (AHT)

What it measures: The average total time spent on a customer interaction, including talk time, hold time, and after-call work (ACW).

How it is calculated: (Total talk time + total hold time + total ACW) ÷ number of calls handled
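In code, the calculation is a straight average over the per-call components (the numbers below are illustrative):

```python
# Sketch: AHT as (talk + hold + after-call work) / calls handled.
# Each tuple is (talk_sec, hold_sec, acw_sec) for one call; values illustrative.

def average_handle_time(calls):
    """calls: list of (talk_sec, hold_sec, acw_sec) tuples.
    Returns AHT in seconds."""
    if not calls:
        return 0.0
    total = sum(talk + hold + acw for talk, hold, acw in calls)
    return total / len(calls)

calls = [(240, 30, 60), (180, 0, 45), (300, 60, 75)]
print(average_handle_time(calls) / 60)  # 5.5 minutes per call
```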

Call center type             Typical AHT    Notes
General customer service     4–6 minutes    Varies widely by product complexity
Technical support            8–15 minutes   Complex troubleshooting extends calls
Billing / simple inquiries   2–4 minutes    Transactional, quick resolution
Sales                        5–10 minutes   Longer calls often = better outcomes

Why it matters: AHT is an efficiency metric that drives staffing requirements. Lower AHT means fewer agents are needed to handle the same call volume — or more calls can be handled by the same number of agents.

Why it is dangerous as a target: AHT is the most misused metric in call centers. When agents are pressured to reduce handle time, they take shortcuts: skipping verification steps, giving quick answers instead of correct ones, rushing customers off the phone, and minimizing after-call documentation. All of these reduce AHT while damaging quality, FCR, and customer satisfaction.

How to use it correctly: Track AHT as a diagnostic tool, not a performance target. Use it to identify:

  • Agents whose AHT is significantly higher than peers on the same call type — they may need coaching on efficiency (system navigation, call control) without sacrificing quality
  • Call types with unusually high AHT — these may indicate a process problem, a knowledge gap, or a system issue
  • AHT trends over time — a gradual increase across the team may indicate a product change, system slowdown, or increased call complexity

Customer satisfaction (CSAT)

What it measures: How the customer rated their interaction, typically on a 1–5 scale collected via post-call survey (IVR, SMS, or email).

How it is calculated: (Number of satisfied responses ÷ total responses) × 100. "Satisfied" is typically defined as 4 or 5 on a 5-point scale.
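A minimal sketch of the calculation, with illustrative survey responses:

```python
# Sketch: CSAT from 1-5 survey responses, counting 4s and 5s as "satisfied".

def csat(responses, satisfied_min=4):
    """responses: list of integer ratings on a 1-5 scale.
    Returns the percentage of responses at or above satisfied_min."""
    if not responses:
        return 0.0
    satisfied = sum(1 for r in responses if r >= satisfied_min)
    return satisfied * 100 / len(responses)

print(csat([5, 4, 3, 5, 2, 4, 1, 5]))  # 5 of 8 satisfied -> 62.5
```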

Call center type             Typical CSAT   Strong CSAT
General customer service     75–80%         85%+
Technical support            70–75%         80%+
Billing / account services   75–85%         85%+

Limitations:

  • Low response rates. Post-call survey response rates are typically 5–15%. The sample is not representative — respondents skew toward extremes (very happy or very unhappy).
  • Recency bias. The survey captures how the customer felt at the end of the call, not whether their issue was actually resolved. An agent who is friendly but gives wrong information may score high on CSAT while generating a repeat contact.
  • Individual scores are unreliable. A single CSAT rating tells you almost nothing. Trends over 30+ responses per agent are meaningful; individual scores are noise.

How to use it correctly: Track CSAT at the team and center level as a trend indicator. At the agent level, use it as one input alongside quality scores and FCR — not as a standalone metric.

Schedule adherence

What it measures: Whether agents are logged in and available at the times they are scheduled to work.

How it is calculated: (Time in adherence ÷ total scheduled time) × 100. "In adherence" means the agent's actual state (available, on call, on break) matches their scheduled state within a tolerance window (typically 5 minutes).
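A simplified sketch of the calculation at minute granularity — a stand-in for what a WFM tool does from state-change events, with illustrative state names:

```python
# Sketch: per-minute adherence with a tolerance window. Each list holds one
# state string per minute of the shift; the minute granularity and the state
# names are illustrative simplifications of what a WFM tool records.

def adherence(scheduled, actual, tolerance_min=5):
    """A minute is in adherence if the agent's actual state matches the
    scheduled state at that minute, or at any minute within the tolerance
    window (so a break started a few minutes late is not penalized)."""
    n = len(scheduled)
    if n == 0:
        return 0.0
    in_adherence = 0
    for i, state in enumerate(actual):
        lo, hi = max(0, i - tolerance_min), min(n, i + tolerance_min + 1)
        if state in scheduled[lo:hi]:
            in_adherence += 1
    return in_adherence * 100 / n

# One hour: break scheduled for minutes 30-44, actually taken 3 minutes late.
scheduled = ["available"] * 30 + ["break"] * 15 + ["available"] * 15
actual    = ["available"] * 33 + ["break"] * 15 + ["available"] * 12
print(adherence(scheduled, actual))                   # within tolerance: 100.0
print(adherence(scheduled, actual, tolerance_min=0))  # strict match: 90.0
```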

Target: 90–95%. Below 90% indicates a systemic problem — either schedules are not communicated clearly, agents are not following them, or supervisors are not managing adherence.

Why it matters: Schedule adherence is the bridge between workforce management planning and actual service delivery. Your WFM team forecasts call volume, calculates the number of agents needed per interval, and builds schedules to hit service level targets. If agents do not follow those schedules, the forecasting is irrelevant — you will be over or understaffed regardless of how accurate the forecast was.

What to watch for: Do not confuse adherence with conformance. Adherence measures whether agents are in the right state at the right time. Conformance measures whether agents worked their total scheduled hours. An agent who takes their lunch 30 minutes late has low adherence (wrong state at the scheduled time) but may have perfect conformance (worked the same total hours).

Quality score

What it measures: How well agents follow defined standards during customer interactions, as evaluated through QA reviews.

How it is calculated: Weighted score across evaluation criteria (resolution accuracy, customer handling, communication, process compliance, efficiency). See our quality improvement guide for scorecard design.

Target: 80–85% minimum. Agents consistently below 75% should be on a performance improvement plan. Agents consistently above 90% should be recognized and used as coaching examples.

Sample size: Minimum 4–6 calls per agent per month for a statistically meaningful score. Fewer than that and a single bad call skews the entire score.

Occupancy (utilization)

What it measures: The percentage of logged-in time agents spend on call-related activity (talk time, hold time, after-call work) versus waiting for the next call.

How it is calculated: (Total handle time ÷ total logged-in time) × 100
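As a sketch for one agent over one shift (all figures illustrative, in minutes):

```python
# Sketch: occupancy for one agent over a shift. Handle time includes talk,
# hold, and after-call work; the denominator is total logged-in time.

def occupancy(talk, hold, acw, logged_in):
    """All arguments in minutes. Returns occupancy as a percentage."""
    if logged_in == 0:
        return 0.0
    handle = talk + hold + acw
    return handle * 100 / logged_in

# 8-hour (480-minute) shift: 350 talk, 20 hold, 62 ACW -> 90% occupancy,
# leaving only 48 minutes of non-call time.
print(round(occupancy(350, 20, 62, 480), 1))  # 90.0
```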

Target: 75–85%. This is not a "higher is better" metric.

Occupancy    What it means
Below 70%    Agents are idle too much — overstaffed or low call volume
70–80%       Healthy range — agents have recovery time between calls
80–85%       Busy but manageable — limited recovery time
85–90%       Approaching unsustainable — burnout risk increases sharply
Above 90%    Back-to-back calls, no recovery — quality drops, turnover rises

Why it is dangerous above 85%: At 90% occupancy over an 8-hour shift, agents get approximately 48 minutes of non-call time — which must cover breaks, bathroom visits, and mental recovery (after-call work is already counted in handle time, and therefore in occupancy). This is not enough to sustain quality performance. If your occupancy is consistently above 85%, you are understaffed, not efficient.

Agent retention rate

What it measures: The percentage of agents who remain employed during a given period. See our retention calculation guide for the formula and segmentation approach.

Target: 75%+ annual retention (equivalent to under 25% annual turnover). Most call centers operate at 55–70% retention, which means they are replacing 30–45% of their workforce every year.

Why it belongs on the KPI dashboard: Retention is not an HR metric — it is an operational metric. Low retention degrades quality (higher proportion of inexperienced agents), increases costs (recruiting and training), and creates chronic overtime as vacancies strain the remaining workforce.

Cost per call

What it measures: The total cost of handling one customer interaction.

How it is calculated: Total operating cost ÷ total calls handled. For a more precise figure, use loaded hourly cost ÷ calls per agent per hour.
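Both versions of the formula in code (the dollar figures are illustrative):

```python
# Sketch: cost per call two ways. All figures are illustrative.

def cost_per_call(total_operating_cost, total_calls):
    """Simple version: total cost over total volume."""
    return total_operating_cost / total_calls

def cost_per_call_loaded(loaded_hourly_cost, calls_per_agent_hour):
    """More precise version: fully loaded agent cost per productive hour."""
    return loaded_hourly_cost / calls_per_agent_hour

# Simple: $150,000 monthly operating cost over 30,000 calls.
print(cost_per_call(150_000, 30_000))         # 5.0
# Loaded: $28/hour fully loaded agent cost, 7 calls per agent-hour.
print(round(cost_per_call_loaded(28, 7), 2))  # 4.0
```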

Call center type                           Typical cost per call
Simple inquiries (billing, order status)   $2.50–$5.00
General customer service                   $4.00–$7.00
Technical support                          $7.00–$15.00
Complex / specialized support              $12.00–$25.00

Why it matters: Cost per call connects operational performance to financial outcomes. It is affected by every other metric — AHT, FCR (repeat calls double the cost), staffing levels, turnover (training costs amortized across fewer calls), and schedule adherence (paid time not spent on calls).

How metrics interact

Understanding metric interactions prevents the most common management mistake: optimizing one number while unknowingly damaging another.

If you push this down...                ...this often goes up
AHT                                     Repeat contacts, complaints, agent stress
Cost per call (through understaffing)   Occupancy, burnout, turnover, overtime
Occupancy (by overstaffing)             Cost per call, idle time
Abandonment rate (by overstaffing)      Cost per call

If you push this up...                  ...watch for impact on...
Service level                           Cost (requires more agents), occupancy goes down
FCR                                     AHT may increase (agents spending more time per call)
Quality scores                          AHT may increase if agents are being more thorough
Schedule adherence                      Agent satisfaction (if enforced rigidly without flexibility)

The metrics that should never be sacrificed for efficiency gains: FCR, quality score, and retention. These are the metrics that drive customer outcomes and long-term operational health. AHT, cost per call, and occupancy are efficiency metrics that should be optimized within the constraints set by quality and retention.

Common measurement mistakes

Using AHT as a performance target. AHT should inform coaching conversations and identify process problems — not determine whether an agent is performing well. An agent with 6-minute AHT and 85% FCR is outperforming an agent with 4-minute AHT and 60% FCR.

Tracking metrics without segmentation. Center-wide averages hide problems. Track every metric by team, shift, account, call type, and agent tenure. A center-wide FCR of 72% is meaningless if it is composed of 85% FCR on billing calls and 55% on technical calls — the technical queue has a problem that the average obscures.
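A quick sketch of that segmentation, using synthetic data that mirrors the billing/technical split above:

```python
# Sketch: the same center-wide FCR broken down by call type. A blended
# average can hide a struggling queue; segmentation exposes it.
from collections import defaultdict

def fcr_by_segment(records, key="call_type"):
    """records: list of dicts with 'resolved' (bool) and a segment field.
    Returns {segment: FCR percentage}."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [resolved, total]
    for r in records:
        counts[r[key]][0] += r["resolved"]
        counts[r[key]][1] += 1
    return {seg: res * 100 / tot for seg, (res, tot) in counts.items()}

records = (
    [{"call_type": "billing", "resolved": True}] * 85
    + [{"call_type": "billing", "resolved": False}] * 15
    + [{"call_type": "technical", "resolved": True}] * 55
    + [{"call_type": "technical", "resolved": False}] * 45
)
# Center-wide FCR is 70% (140/200), but the breakdown tells the real story:
print(fcr_by_segment(records))  # {'billing': 85.0, 'technical': 55.0}
```

The same function works for any segment field — swap `key` for team, shift, account, or tenure band.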

Setting targets without understanding cost tradeoffs. Improving service level from 80/20 to 90/10 sounds like a reasonable goal, but it may require 15–20% more agents. If the cost of those additional agents exceeds the value of the improved customer experience, it is not a good investment. Every target should have a cost calculation behind it.
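The staffing arithmetic behind that kind of claim can be sketched with the standard Erlang C queueing model (not covered in this article; the 200 calls/hour and 5-minute AHT inputs are illustrative):

```python
# Sketch: agents needed to hit a service level target, using the standard
# Erlang C model. Inputs are illustrative: 200 calls/hour, 300-second AHT.
import math

def erlang_c(agents, traffic):
    """Probability an arriving call has to wait; traffic in erlangs."""
    if agents <= traffic:
        return 1.0  # unstable queue: every call waits
    s = sum(traffic**k / math.factorial(k) for k in range(agents))
    top = traffic**agents / math.factorial(agents) * agents / (agents - traffic)
    return top / (s + top)

def agents_for_target(calls_per_hour, aht_sec, pct, threshold_sec):
    """Smallest agent count where P(answered within threshold) >= pct."""
    traffic = calls_per_hour * aht_sec / 3600  # offered load in erlangs
    agents = max(1, math.ceil(traffic))
    while True:
        pw = erlang_c(agents, traffic)
        sl = 1 - pw * math.exp(-(agents - traffic) * threshold_sec / aht_sec)
        if sl >= pct:
            return agents
        agents += 1

a80 = agents_for_target(200, 300, 0.80, 20)  # agents for 80/20
a90 = agents_for_target(200, 300, 0.90, 10)  # agents for 90/10
print(a80, a90, f"{(a90 - a80) * 100 / a80:.0f}% more agents")
```

Running this for your own volumes gives the headcount delta to price against the value of the tighter target.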

Measuring everything, acting on nothing. Some call centers generate 30-page weekly reports full of metrics that no one reads or acts on. Track 8–10 metrics that drive decisions. If a metric does not change how you staff, coach, or invest, stop tracking it.

Ignoring the agent experience metrics. Retention, occupancy, and schedule adherence are agent experience metrics. If they are deteriorating while customer-facing metrics look fine, you are running on borrowed time — the agent satisfaction problems will show up in customer metrics within weeks or months.

About the Author

Vik Chadha

Founder of HiveDesk. Has been helping businesses manage remote teams with time tracking and workforce management solutions since 2011.
