
Workforce Analytics in Call Centers — What to Measure

Vik Chadha · Updated · 13 min read

Workforce analytics in a call center is the practice of using operational data to identify problems, make decisions, and measure whether those decisions worked. It is not a separate function — it is the data layer underneath workforce planning, scheduling, quality management, and cost management.

The difference between an operation that uses analytics and one that does not is the difference between diagnosing a problem and guessing at it. A supervisor who notices service level is low can react by offering overtime. A supervisor who sees that service level drops every Tuesday between 10:00 and 11:30 because the forecast under-predicts Tuesday morning volume by 18% can fix the forecast — and the problem stops recurring.

Most call centers collect far more data than they use. The ACD produces interval-level data on every metric. The time tracking system records hours by agent, by day. The QA system stores evaluation scores. The challenge is not collecting data — it is knowing which data to look at, how often, and what to do when the numbers are off.

The four categories of workforce analytics

Every analytics use case in a call center falls into one of four categories. Each answers a different question and drives a different type of decision.

| Category | Question it answers | Data sources | Decision it drives |
|---|---|---|---|
| Staffing analytics | Do I have enough people? | Volume data, forecast accuracy, headcount, attrition, shrinkage | Hiring, overtime approval, headcount planning |
| Scheduling analytics | Are my people in the right place at the right time? | Service level by interval, adherence, occupancy by shift, absence rates | Schedule redesign, break staggering, shift adjustments |
| Performance analytics | Are my people doing the work correctly and efficiently? | AHT by agent and call type, FCR, QA scores, agent scorecards | Coaching assignments, training needs, process changes |
| Cost analytics | Is the operation financially sustainable? | Labor cost, overtime, cost per call, attrition cost, back office cost | Budget decisions, headcount vs. overtime trade-offs, vendor negotiation |

Staffing analytics: the data that drives headcount decisions

What to track

| Metric | How to calculate | Frequency | What it tells you |
|---|---|---|---|
| Forecast accuracy | 100 − \|((Actual − Forecast) / Forecast) × 100\| | Weekly | Whether the staffing plan is built on reliable volume predictions. Below 85% = the schedule will be systematically wrong |
| Forecast bias | Average of (Forecast − Actual) across intervals | Weekly | Whether you consistently over-predict (positive bias = overstaffed) or under-predict (negative bias = understaffed + overtime) |
| Shrinkage (actual vs. planned) | (Scheduled hours − productive hours) / scheduled hours | Monthly | Whether the staffing calculation uses the right shrinkage assumption. If actual shrinkage is 32% but the plan assumes 25%, every interval is understaffed |
| Attrition rate | Departures in period / average headcount | Monthly | Whether the hiring pipeline is keeping up. Each percentage point of monthly attrition requires replacing 1 agent per 100 headcount per month |
| Time to proficiency | Weeks from go-live to reaching production targets | Per training class | Whether training is producing competent agents on schedule. If ramp takes 12 weeks instead of 8, the staffing gap lasts a month longer than planned |
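The two forecast metrics above are simple enough to compute from any per-interval export of actual and forecast call volumes. A minimal sketch (the interval volumes are illustrative, not from any particular ACD):

```python
# Forecast accuracy and bias from per-interval volumes (illustrative numbers).

def forecast_accuracy(actual: float, forecast: float) -> float:
    """100 - |((actual - forecast) / forecast) * 100|."""
    return 100 - abs((actual - forecast) / forecast * 100)

def forecast_bias(forecasts: list[float], actuals: list[float]) -> float:
    """Average of (forecast - actual) across intervals.
    Positive = over-predict (overstaffed); negative = under-predict
    (understaffed, plus overtime to compensate)."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / len(forecasts)

forecasts = [110, 120, 130, 150]   # calls per 30-minute interval
actuals   = [120, 135, 150, 140]

print(f"Accuracy: {forecast_accuracy(sum(actuals), sum(forecasts)):.1f}%")
print(f"Bias: {forecast_bias(forecasts, actuals):.2f} calls per interval")
```

A negative bias like this one means the forecast runs low, which is exactly the pattern that shows up as the same intervals missing service level every week.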

How to use it

| Finding | Decision |
|---|---|
| Forecast under-predicts by 10%+ consistently | Increase the forecast by the bias amount. If Monday mornings are always 15% above forecast, build that into the Monday forecast |
| Actual shrinkage exceeds planned by 5+ points | Recalculate required staff using actual shrinkage. This will increase the number of agents scheduled per interval |
| Attrition rate exceeds 4% per month | The hiring pipeline needs to produce replacements faster. Calculate: monthly departures + growth hires = monthly hiring target. Account for 7-week recruiting + training lead time |
| Overtime exceeds 5% of total hours for 3+ consecutive weeks | The operation is structurally understaffed. Hiring is cheaper than sustained overtime at 1.5x rate |
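The hiring-target arithmetic in the attrition row works out as follows. The headcount, attrition rate, and growth figures here are assumptions for illustration:

```python
# Monthly hiring target: departures + growth hires, opened ahead of the
# recruiting + training lead time. All inputs are illustrative assumptions.
headcount = 100
monthly_attrition_rate = 0.05     # 5% of headcount leaves each month
growth_hires = 3                  # planned net headcount growth per month
lead_time_weeks = 7               # recruiting + training before productivity

monthly_departures = headcount * monthly_attrition_rate
monthly_hiring_target = monthly_departures + growth_hires

print(f"Hire {monthly_hiring_target:.0f} agents/month; "
      f"open requisitions {lead_time_weeks} weeks before the gap appears")
```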

Scheduling analytics: the data that fixes coverage gaps

What to track

| Metric | How to calculate | Frequency | What it tells you |
|---|---|---|---|
| Service level by interval | % of calls answered within threshold, per 30-minute interval | Daily | Which specific intervals miss the target. The same intervals missing every day indicate a schedule gap, not random variation |
| Occupancy by shift | (Handle time) / (handle time + available time), per shift | Weekly | Whether workload is distributed evenly. Occupancy above 85% on one shift and below 70% on another = agents are in the wrong shifts |
| Adherence by agent | % of time agent is in the correct state per schedule | Weekly | Whether agents are following the schedule. Below 90% = agents are going on break early/late, logging in late, or spending time in incorrect states |
| Net staffing by interval | Agents logged in and available − agents required per staffing calculation | Daily | Whether the actual agent count matches the plan. Negative net staffing = understaffed. Persistent positive = overstaffed (wasted labor cost) |
| Break overlap | % of agents on break simultaneously in any interval | Weekly | Whether breaks are staggered. If 20% of agents are on break simultaneously, service level will drop during those intervals |
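Net staffing by interval is a straightforward subtraction once you have both the staffing requirement and the login data side by side. A sketch with illustrative interval labels and counts:

```python
# Net staffing by interval: agents logged in and available minus agents
# required by the staffing calculation. Intervals and counts are illustrative.
required  = {"10:00": 24, "10:30": 26, "11:00": 25, "11:30": 22}
available = {"10:00": 21, "10:30": 22, "11:00": 25, "11:30": 24}

net = {ivl: available[ivl] - required[ivl] for ivl in required}
gaps = [ivl for ivl, n in net.items() if n < 0]   # understaffed intervals

print(net)
print("Understaffed intervals:", gaps)
```

Run daily, this surfaces the recurring-gap pattern (the same intervals negative every day) that signals a schedule problem rather than a bad day.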

How to use it

| Finding | Decision |
|---|---|
| Same intervals miss service level every day (e.g., 10:00–11:30 AM) | The schedule does not match the volume curve. Add agents to those intervals — stagger shift starts, add a mid-morning split shift, or move agents from an overstaffed interval |
| Occupancy is 88% on evening shift, 68% on day shift | Too many agents on day shift relative to volume, too few on evening. Rebalance shift assignments |
| Adherence drops on specific days or shifts | Check whether break times are realistic for those shifts. If agents consistently take breaks 15 minutes late, the break schedule may conflict with call patterns |
| Service level recovers after break windows end | Breaks are clustered rather than staggered. Spread breaks so no more than 10–15% of agents are off the phones simultaneously |
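The break-staggering fix in the last row can be sketched as a simple round-robin assignment that caps concurrent breaks at 10% of the team. The team size, slot count, and cap are assumptions, not a prescription:

```python
# Break staggering sketch: assign 15-minute break slots round-robin while
# capping concurrent breaks at a percentage of the team. Inputs are assumed.
def stagger_breaks(agent_count: int, slot_count: int, cap_pct: float = 0.10):
    """Return (agents-on-break per slot, agent -> slot assignment),
    keeping every slot at or under the cap."""
    cap = max(1, int(agent_count * cap_pct))
    slots = [0] * slot_count
    assigned = {}
    for agent in range(agent_count):
        slot = agent % slot_count
        while slots[slot] >= cap:          # find the next slot with room
            slot = (slot + 1) % slot_count
        slots[slot] += 1
        assigned[agent] = slot
    return slots, assigned

slots, _ = stagger_breaks(agent_count=40, slot_count=12)
print(slots)   # no slot exceeds 4 agents (10% of 40)
```

This assumes total slot capacity covers the team (slots × cap ≥ agents); a real scheduler would also weight slots by forecast volume so breaks land in the lighter intervals.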

Performance analytics: the data that drives coaching

What to track

| Metric | Segmentation | Frequency | What it tells you |
|---|---|---|---|
| AHT | By agent and by call type | Weekly | Which agents are slow on which call types. A blended AHT average hides call-type-specific problems |
| FCR | By agent and by call type | Weekly | Which agents are not resolving issues, and on which call types. Low FCR on a specific call type = flowchart or process gap, not necessarily an agent gap |
| QA scores | By agent, by evaluator, by rubric category | Monthly | Which agents need coaching and on which specific behaviors. Score by evaluator reveals calibration issues |
| Hold time | By agent | Weekly | Whether agents are putting customers on hold to search for answers. High hold time across many agents = knowledge or system access problem. High hold time for specific agents = individual training need |
| Transfer/escalation rate | By agent and by call type | Monthly | Whether agents are escalating calls they should resolve. High escalation rate for a specific call type = agents lack authority or knowledge for that type |
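The AHT segmentation in the first row takes a few lines once call records carry agent and call type. A sketch with made-up call records (agent names, call types, and handle times are illustrative):

```python
# Segment AHT by (agent, call type) so blended averages stop hiding problems.
# Call records are illustrative: (agent, call_type, handle_seconds).
from collections import defaultdict

calls = [
    ("ana", "billing", 520), ("ana", "billing", 560), ("ana", "tech", 300),
    ("ben", "billing", 310), ("ben", "tech", 330), ("ben", "tech", 290),
]

totals = defaultdict(lambda: [0, 0])   # (agent, type) -> [seconds, call count]
for agent, call_type, secs in calls:
    totals[(agent, call_type)][0] += secs
    totals[(agent, call_type)][1] += 1

aht = {key: secs / n for key, (secs, n) in totals.items()}
print(aht)   # ana is slow on billing specifically, not across the board
```

In this toy data, ana's blended AHT looks only moderately high; the segmented view shows the excess is entirely on billing calls, which is what you would coach on.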

How to use it

| Finding | Decision |
|---|---|
| 5 agents have AHT 40%+ above target on billing calls | Do not coach all 5 on "reduce AHT." Listen to their billing calls, identify the specific behavior driving the excess time (searching for billing history, re-reading policies, over-explaining). Coach on the behavior, not the metric |
| FCR is below 60% for a specific call type across all agents | The problem is not the agents — it is the process. The resolution path for that call type is incomplete, or agents lack the authority to resolve it |
| QA scores vary by 15+ points depending on which evaluator scored the call | The QA program has a calibration problem. Evaluators are applying the rubric inconsistently. Calibrate before using QA data for coaching |
| Agent performance declined over the last 4 weeks (was meeting targets, now below) | Something changed. Check whether a process, system, or schedule change coincides with the decline. If not, have a direct conversation |

Cost analytics: the data that controls spending

What to track

| Metric | How to calculate | Frequency | What it tells you |
|---|---|---|---|
| Cost per call | Total labor cost / total calls handled | Monthly | The unit economics of the operation. Increasing cost per call means volume is declining, labor cost is rising, or efficiency is dropping |
| Overtime as % of labor hours | Overtime hours / total hours | Weekly | Whether the operation is structurally understaffed. Above 5% sustained = hiring is cheaper |
| Attrition replacement cost | Departures × (recruiting + training + ramp productivity loss) | Monthly | The real cost of turnover — typically $5,000–$7,000 per agent for a 3-week training program. 5% monthly attrition in a 100-agent operation = $25,000–$35,000 per month in replacement cost |
| Cost per call by call type | Labor cost allocated by AHT weight per call type | Quarterly | Which call types consume disproportionate cost. A call type with 8-minute AHT costs 2x a call type with 4-minute AHT |
| Back office cost as % of total | (Back office labor + overhead) / total operational cost | Quarterly | Whether administrative functions are growing faster than the operation. Rising ratio = manual processes adding cost |
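The first and third formulas in this table can be sketched directly; the labor cost, volume, and per-agent replacement figures below are assumptions chosen to match the article's illustrative ranges:

```python
# Cost-per-call and attrition replacement cost, using assumed inputs.
total_labor_cost = 450_000          # monthly labor cost, assumed
calls_handled = 60_000              # monthly volume, assumed
cost_per_call = total_labor_cost / calls_handled

departures = 5                      # 5% monthly attrition at 100 headcount
replacement_cost_per_agent = 6_000  # recruiting + training + ramp loss (midpoint)
attrition_cost = departures * replacement_cost_per_agent

print(f"Cost per call: ${cost_per_call:.2f}")
print(f"Monthly attrition replacement cost: ${attrition_cost:,}")
```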

How to use it

| Finding | Decision |
|---|---|
| Cost per call increased 8% quarter over quarter | Decompose: did volume drop (fewer calls to spread fixed costs across), did AHT increase (each call costs more), did overtime rise, or did attrition drive higher training costs? The cause determines the fix |
| Overtime is 12% of total hours | Calculate the annual overtime premium: overtime hours × 0.5 × hourly rate. Compare to the cost of hiring additional agents. If overtime premium exceeds the fully loaded cost of new hires, hire |
| Attrition cost exceeds $30,000/month | Investigate attrition drivers — schedule dissatisfaction, compensation, occupancy-driven burnout, management issues. Reducing attrition by 2 percentage points saves more than most process improvements |
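The overtime-versus-hiring comparison in the second row is a one-line calculation. The hours, rate, and fully loaded hire cost below are assumptions:

```python
# Annual overtime premium vs. the cost of hiring; all inputs are assumed.
annual_ot_hours = 12_000            # e.g., 12% of a ~100,000-hour operation
hourly_rate = 18.00
fully_loaded_new_hire = 55_000      # assumed annual cost of one added agent

ot_premium = annual_ot_hours * 0.5 * hourly_rate   # the 0.5x premium only
print(f"Annual overtime premium: ${ot_premium:,.0f}")
if ot_premium > fully_loaded_new_hire:
    print("Premium alone exceeds one fully loaded hire: hiring is cheaper")
```

Note this compares only the 0.5x premium, not the full 1.5x rate, since the straight-time portion would be paid to a new hire anyway.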

The analytics cadence

Analytics is not useful as a one-time exercise. It drives decisions through a regular review cadence.

| Cadence | What to review | Who reviews | Time required | Decisions made |
|---|---|---|---|---|
| Daily | Service level by interval, agents vs. schedule, queue depth, AHT spikes | Supervisor | 15 min | Same-day intraday adjustments |
| Weekly | Forecast accuracy, adherence, AHT by agent, overtime hours, absence rate, FCR | Supervisor + WFM | 30–45 min | Next week's schedule adjustments, coaching assignments |
| Monthly | Attrition, shrinkage actual vs. plan, cost per call, QA score trends, training effectiveness | Ops manager | 60 min | Hiring decisions, process changes, training priorities, budget adjustments |
| Quarterly | Benchmarking, budget vs. actual, strategic workforce plan, attrition trends | Ops manager + leadership | 90 min | Headcount planning, technology investments, contract negotiations (BPO) |

Analytics for BPOs

BPO operations require all of the above analytics segmented by client account. Aggregate metrics across all clients are useful for internal management but meaningless for client reporting and SLA accountability.

| Analytics requirement | Why it is different for BPOs |
|---|---|
| All metrics tracked per client | Client A may be meeting SLA while Client B is missing. Aggregate data hides the problem |
| Billable utilization | Non-billable time (bench, training, internal meetings) directly affects revenue. Track billable hours as a % of total paid hours per client |
| Cross-client agent movement | When cross-trained agents move between accounts during intraday management, track the time per account to ensure accurate client billing and SLA reporting |
| Client-specific cost per call | Each client has different AHT, volume, and complexity. Cost per call must be calculated per client to assess contract profitability |
| SLA performance trending | Track SLA metrics by client over time — not just whether the target was met this month, but whether performance is trending toward or away from the target |
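Billable utilization per client follows directly from the definition above. A sketch where the client names and hours are illustrative:

```python
# Billable utilization per client: billable hours as a % of total paid hours,
# tracked per account. Client names and hours are illustrative.
paid_hours     = {"client_a": 4_000, "client_b": 2_500}
billable_hours = {"client_a": 3_400, "client_b": 2_300}

utilization = {c: billable_hours[c] / paid_hours[c] * 100 for c in paid_hours}
print({c: f"{u:.1f}%" for c, u in utilization.items()})
```

The per-client split matters here for the same reason as every other BPO metric: an acceptable blended utilization can hide one account carrying an expensive bench.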

Common analytics mistakes

Tracking metrics without connecting them to decisions. A dashboard that shows 30 metrics in real time but does not tell anyone what to do when a metric is off is reporting, not analytics. Every metric should have a defined threshold, a responsible person, and a documented response.

Using aggregate averages that hide problems. A daily service level of 80% can mean 80% every interval — or 95% in the morning and 60% in the afternoon. An average AHT of 360 seconds can mean every agent is at 360 — or half are at 280 and half are at 440. Always segment by interval, by agent, and by call type before drawing conclusions.
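The point about averages is easy to demonstrate with two toy days that share the same daily service level (the interval figures are illustrative):

```python
# Two days with identical daily averages but very different interval patterns.
day_flat   = [80, 80, 80, 80]   # % answered within threshold, per interval
day_skewed = [95, 95, 65, 65]

for name, day in [("flat", day_flat), ("skewed", day_skewed)]:
    avg = sum(day) / len(day)
    misses = [i for i, sl in enumerate(day) if sl < 80]
    print(f"{name}: daily average {avg}%, intervals below target: {misses}")
```

Both days report 80% for the day, but only the segmented view shows that the second day missed half its intervals.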

Measuring too many things. An operation tracking 50 metrics weekly will not act on any of them. Focus the daily review on 5 metrics, the weekly review on 8–10, and the monthly review on 12–15. Everything else is available if needed for diagnosis but is not part of the regular review.

Treating correlation as causation. AHT went down the same month you launched a new training module — but AHT also went down because call mix shifted toward simpler call types. Check for alternative explanations before attributing outcomes to interventions.

Not acting on the data. The most common analytics failure is not a data problem — it is an action problem. The data shows that Tuesday mornings are understaffed, that 5 agents need coaching, that overtime is structural. But nobody changes the forecast, nobody schedules the coaching, nobody approves the hire. Analytics only improves productivity if it drives decisions that someone executes.


About the Author

Vik Chadha

Founder of HiveDesk. Has been helping businesses manage remote teams with time tracking and workforce management solutions since 2011.
