Every workforce decision in a call center traces back to a data input. The staffing calculation uses shrinkage, AHT, and volume data. The schedule uses the staffing output. The budget uses headcount and rate data. Coaching decisions use QA and performance data. When the input data is wrong, the decision is wrong — and the cost is not abstract. It is a specific dollar amount tied to a specific operational failure.
The challenge is that bad data in workforce analytics rarely announces itself. The number looks reasonable, the report runs without errors, and the decision gets made. The cost shows up weeks or months later as a missed SLA, a budget overrun, or a staffing gap — and by that point, the connection to the original data error is not obvious.
The data errors and what they cost
Each data error below is something that actually happens in call center operations. For each one: what the error is, what bad decision it causes, and the financial impact.
Error 1: Shrinkage assumption is wrong
| Detail | Description |
|---|---|
| What goes wrong | The staffing model assumes 25% shrinkage. Actual shrinkage is 32%. The assumption was set during implementation and never updated |
| Bad decision it causes | Every shift is understaffed by the difference between planned and actual shrinkage. If the operation needs 30 agents on phones and plans for 25% shrinkage, it schedules 40 agents. But at 32% actual shrinkage, only 27 of those 40 are on phones — 3 short every interval |
| Financial cost | 3 missing agents × 8 hours × $15/hr × 1.5 (overtime rate to cover) × 5 days = $2,700/week in overtime. Over a quarter: $35,100. Plus the service level degradation from intervals where coverage gaps were not filled |
| How to detect | Calculate actual shrinkage from time tracking data every 4 weeks. Compare to the assumption in the staffing model. If actual exceeds planned by more than 3 percentage points, update the model |
Why this error persists: Shrinkage changes slowly. New agents get added to training rotations. Team meetings increase. Absence patterns shift seasonally. Each change adds 0.5–1% to shrinkage, and none of them individually seems worth updating the model for. After 12 months of incremental drift, the assumption is 5–7 points off.
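A minimal sketch of the four-week shrinkage check described above, in Python. The weekly paid-hours and shrinkage-hours figures are illustrative, and the layout is an assumption about how the time tracking export is summarized; the 3-point drift threshold comes from the detection rule.

```python
# Sketch of the shrinkage drift check: compare actual shrinkage from time tracking
# data to the assumption in the staffing model. Figures are illustrative.

PLANNED_SHRINKAGE = 0.25          # assumption currently baked into the staffing model
DRIFT_THRESHOLD_POINTS = 3.0      # update the model if actual exceeds planned by > 3 pts

weekly_hours = [
    # (paid_hours, shrinkage_hours) for the last 4 weeks, illustrative numbers
    (4000, 1240),
    (4100, 1320),
    (3950, 1280),
    (4050, 1300),
]

paid = sum(p for p, _ in weekly_hours)
shrink = sum(s for _, s in weekly_hours)
actual_shrinkage = shrink / paid

drift_points = (actual_shrinkage - PLANNED_SHRINKAGE) * 100
if drift_points > DRIFT_THRESHOLD_POINTS:
    print(f"Actual shrinkage {actual_shrinkage:.1%} vs planned {PLANNED_SHRINKAGE:.0%} "
          f"({drift_points:.1f} pts over) -> update the staffing model")
else:
    print(f"Shrinkage within tolerance: {actual_shrinkage:.1%} actual vs {PLANNED_SHRINKAGE:.0%} planned")
```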
Error 2: AHT baseline does not reflect the current call mix
| Detail | Description |
|---|---|
| What goes wrong | The AHT used in forecasting is the blended average across all call types. But the call mix has shifted — billing inquiries (AHT: 4 minutes) have decreased and technical support calls (AHT: 9 minutes) have increased. The blended AHT used in the model is 5.5 minutes. The actual blended AHT is now 6.8 minutes |
| Bad decision it causes | The forecast calculates that 50 agents can handle the projected volume at 5.5 minutes AHT. At 6.8 minutes, those 50 agents can handle 19% fewer calls. The operation is short-staffed by the equivalent of 10 agents |
| Financial cost | Understaffing by 10 agents means either overtime (10 × 8 hrs × $7.50 OT premium = $600/day) or service level misses. At an 80/20 SLA target, a 10-agent shortage drops service level to approximately 60/20 during peak intervals — which may trigger SLA penalties in a BPO |
| How to detect | Track AHT by call type monthly. Compare the blended AHT used in the forecast model to the actual blended AHT from the previous 4 weeks. If they diverge by more than 10%, update the model |
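One way to script the blended-AHT drift check. The call types, volumes, and per-type AHT values are illustrative; the 10% divergence threshold comes from the detection rule above.

```python
# Sketch of the blended-AHT drift check: recompute blended AHT from the actual
# call mix and compare it to the AHT baked into the forecast model.

MODEL_BLENDED_AHT_MIN = 5.5   # AHT currently used in the forecast model, in minutes

# Actual volume and AHT by call type over the previous 4 weeks (illustrative)
last_4_weeks = {
    "billing":      {"calls": 12000, "aht_min": 4.0},
    "tech_support": {"calls": 18000, "aht_min": 9.0},
    "account":      {"calls": 6000,  "aht_min": 5.0},
}

total_calls = sum(v["calls"] for v in last_4_weeks.values())
total_handle_min = sum(v["calls"] * v["aht_min"] for v in last_4_weeks.values())
actual_blended_aht = total_handle_min / total_calls

divergence = (actual_blended_aht - MODEL_BLENDED_AHT_MIN) / MODEL_BLENDED_AHT_MIN
if abs(divergence) > 0.10:
    print(f"Blended AHT {actual_blended_aht:.1f} min vs model {MODEL_BLENDED_AHT_MIN} min "
          f"({divergence:+.0%}) -> update the forecast model")
```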
Error 3: Attrition is measured but not segmented
| Detail | Description |
|---|---|
| What goes wrong | The operation reports overall attrition at 40% annually. But the attrition is not segmented by tenure. In reality, attrition among agents with fewer than 90 days is 65%, and attrition among agents with 90+ days is 18%. The hiring plan is built on the blended 40% number |
| Bad decision it causes | The hiring plan assumes each new hire has a 60% chance of staying a year. But new hires actually have only a 35% chance of surviving the first 90 days. The pipeline needs to be much larger than the blended rate suggests — and the training investment in agents who leave within 90 days is wasted |
| Financial cost | If 20 agents are hired per quarter and 13 leave within 90 days (65% early attrition), the wasted training cost is 13 × $4,000 = $52,000/quarter. The blended attrition rate masks this — it suggests only 8 of the 20 would leave (40%), underestimating the waste by $20,000/quarter |
| How to detect | Segment attrition by tenure band: 0–30 days, 31–90 days, 91–180 days, 180+ days. If early attrition (0–90 days) is more than double the tenured attrition rate (180+ days), the blended number is misleading and should not be used for hiring or budget decisions |
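A small sketch of the tenure-band segmentation. The termination records and band headcounts are illustrative assumptions; the "early attrition more than double tenured attrition" test comes from the detection rule above.

```python
# Sketch of tenure-band attrition segmentation: bucket terminations by tenure at
# exit, compute a rate per band, and flag when the blended rate is misleading.

# Tenure in days at termination, last 12 months (illustrative)
terminations = [12, 25, 40, 55, 61, 70, 88, 95, 120, 200, 310, 365]

# Average headcount sitting in each band over the same 12 months (illustrative)
avg_headcount = {"0-30": 8, "31-90": 14, "91-180": 20, "180+": 58}

def band(tenure_days):
    if tenure_days <= 30:
        return "0-30"
    if tenure_days <= 90:
        return "31-90"
    if tenure_days <= 180:
        return "91-180"
    return "180+"

terms_by_band = {b: 0 for b in avg_headcount}
for t in terminations:
    terms_by_band[band(t)] += 1

rates = {b: terms_by_band[b] / avg_headcount[b] for b in avg_headcount}
early = (terms_by_band["0-30"] + terms_by_band["31-90"]) / (avg_headcount["0-30"] + avg_headcount["31-90"])
tenured = rates["180+"]

for b, r in rates.items():
    print(f"{b:>7}: {terms_by_band[b]} terms, {r:.0%} annualized")
if tenured > 0 and early > 2 * tenured:
    print("Early attrition is more than double tenured attrition -> do not size hiring off the blended rate")
```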
Error 4: Forecast uses the wrong historical period
| Detail | Description |
|---|---|
| What goes wrong | The volume forecast is built from the same weeks last year. But last year's data included an anomaly — a system outage generated a surge of inbound calls, or a marketing campaign drove a one-time spike. The forecast treats the anomaly as a normal pattern |
| Bad decision it causes | Overstaffing for weeks that are forecast too high (based on the anomaly), understaffing for weeks where the anomaly volume did not occur. Either way, the schedule does not match reality |
| Financial cost | Overstaffing by 5 agents for 2 weeks: 5 × 40 hrs × $15/hr = $3,000/week × 2 = $6,000 in paid idle time. Understaffing the following week by 5 agents: overtime or SLA miss. A single bad forecast cycle can cost $10,000–$15,000 in a 100-agent operation |
| How to detect | Before using historical data in the forecast, review the source period for anomalies. Flag any week where volume exceeded or fell below the 8-week rolling average by more than 15%. Exclude or normalize those weeks |
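A quick sketch of the anomaly screen for historical volume. The weekly volumes are illustrative; the 15% band around an 8-week rolling average comes from the detection rule above.

```python
# Sketch of the anomaly screen: flag any week that deviates from the trailing
# 8-week average by more than 15% before the data feeds the forecast.

weekly_volume = [10200, 9900, 10450, 10100, 9800, 10300, 10050, 9950,
                 14200,  # outage week -- should be flagged
                 10150, 10000]

WINDOW = 8
THRESHOLD = 0.15

flagged = []
for i in range(WINDOW, len(weekly_volume)):
    rolling_avg = sum(weekly_volume[i - WINDOW:i]) / WINDOW
    deviation = (weekly_volume[i] - rolling_avg) / rolling_avg
    if abs(deviation) > THRESHOLD:
        flagged.append((i, weekly_volume[i], deviation))

for week_index, volume, deviation in flagged:
    print(f"Week {week_index}: {volume} calls is {deviation:+.0%} vs the 8-week average "
          f"-> exclude or normalize before forecasting")
```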
Error 5: Timesheet data has systematic errors
| Detail | Description |
|---|---|
| What goes wrong | Timesheets have missed clock-ins that default to full-shift hours, auto-deducted lunch breaks that agents worked through, or overtime hours not flagged correctly. These errors are small per occurrence but systematic — they happen every pay period |
| Bad decision it causes | Payroll overpays or underpays. Labor cost reports are wrong. Cost per call is calculated from incorrect labor hours. Overtime budget appears on track when it is actually over |
| Financial cost | In a 100-agent operation, if 10% of timesheets have errors averaging 30 minutes per pay period: 10 agents × 0.5 hrs × 26 pay periods × $15/hr = $1,950/year in overpayment alone. The compliance risk is larger — FLSA violations from incorrect overtime calculation can result in back-pay liability for every affected agent |
| How to detect | Daily exception review: compare timesheet clock-in/out times to ACD login/logout times. Any discrepancy greater than 15 minutes should be investigated same-day, not at end of pay period |
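A minimal sketch of the daily timesheet-vs-ACD exception review. Agent IDs and timestamps are illustrative, and it assumes both systems can export a first punch and a first login per agent per day; the 15-minute threshold comes from the detection rule above.

```python
# Sketch of the daily exception check: compare each agent's timesheet clock-in
# to their first ACD login and flag gaps over 15 minutes for same-day review.
from datetime import datetime, timedelta

THRESHOLD = timedelta(minutes=15)

timesheet = {  # agent_id -> clock-in time recorded on the timesheet
    "A101": datetime(2024, 5, 6, 8, 0),
    "A102": datetime(2024, 5, 6, 8, 0),   # defaulted to shift start after a missed punch
}
acd_login = {   # agent_id -> first ACD login of the day
    "A101": datetime(2024, 5, 6, 8, 4),
    "A102": datetime(2024, 5, 6, 8, 38),
}

for agent, punch in timesheet.items():
    gap = abs(acd_login[agent] - punch)
    if gap > THRESHOLD:
        print(f"{agent}: timesheet {punch:%H:%M} vs ACD login {acd_login[agent]:%H:%M} "
              f"({gap.seconds // 60} min gap) -> investigate today")
```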
Error 6: QA scores do not reflect actual quality
| Detail | Description |
|---|---|
| What goes wrong | QA evaluations are not calibrated across evaluators. Evaluator A scores a call at 92%; Evaluator B scores the same call at 78%. The QA data shows agent performance differences that are actually evaluator differences |
| Bad decision it causes | Agents assigned to the lenient evaluator appear to be top performers. Agents assigned to the strict evaluator appear to need coaching. Performance reviews and coaching sessions are based on evaluator variance, not agent performance |
| Financial cost | Coaching time misdirected: if a supervisor spends 2 hours/week coaching agents who are actually performing well (but were scored low by a strict evaluator), that is 104 hours/year of supervisor time wasted — approximately $3,100 at $30/hour. Meanwhile, agents who actually need coaching (but were scored high by a lenient evaluator) do not improve, and their quality issues reach customers |
| How to detect | Monthly calibration: have all evaluators score the same 5 calls independently. If scores diverge by more than 5 points on the same call, the rubric is being applied inconsistently. Calibrate before the next evaluation cycle |
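A short sketch of the monthly calibration check. The evaluator names and scores are illustrative; the 5-point spread threshold comes from the detection rule above.

```python
# Sketch of the calibration check: every evaluator scores the same five calls,
# and any call with a spread above 5 points triggers recalibration.

calibration_scores = {
    "call_1": {"eval_A": 92, "eval_B": 78, "eval_C": 88},
    "call_2": {"eval_A": 85, "eval_B": 83, "eval_C": 86},
    "call_3": {"eval_A": 90, "eval_B": 81, "eval_C": 89},
    "call_4": {"eval_A": 76, "eval_B": 74, "eval_C": 77},
    "call_5": {"eval_A": 95, "eval_B": 94, "eval_C": 91},
}

MAX_SPREAD = 5  # points

for call, scores in calibration_scores.items():
    spread = max(scores.values()) - min(scores.values())
    if spread > MAX_SPREAD:
        print(f"{call}: {spread}-point spread across evaluators {scores} "
              f"-> recalibrate the rubric before the next evaluation cycle")
```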
Error 7: Schedule adherence is measured but breaks are not categorized
| Detail | Description |
|---|---|
| What goes wrong | The adherence metric captures that agents are off-phone, but does not distinguish between scheduled breaks, unscheduled breaks, coaching sessions, training, and after-call work. All off-phone time looks the same in the data |
| Bad decision it causes | An agent who takes 45 minutes of unscheduled breaks per shift appears identical to an agent who was pulled for 45 minutes of coaching. The first is an adherence problem. The second is planned shrinkage. If the operation treats both the same way, it either under-counts shrinkage or unfairly penalizes coached agents |
| Financial cost | If unscheduled break time is not separated from planned shrinkage, the shrinkage calculation is contaminated (see Error 1). Additionally, agents who receive coaching may resist it if they know it counts against their adherence score — reducing the effectiveness of the coaching program |
| How to detect | Ensure the WFM system or time tracking tool categorizes off-phone time by type: scheduled break, unscheduled break, coaching, training, meeting, administrative. If all off-phone time is in one bucket, the adherence and shrinkage data are both unreliable |
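A sketch of the off-phone rollup the detection rule calls for, keeping planned shrinkage separate from adherence exceptions. The aux codes and minutes are illustrative assumptions about how the WFM system labels off-phone time.

```python
# Sketch of an off-phone rollup that separates planned shrinkage from adherence
# exceptions. A single "off-phone" bucket makes both numbers unusable.

PLANNED = {"scheduled_break", "coaching", "training", "meeting", "administrative"}
EXCEPTION = {"unscheduled_break", "uncategorized"}

# Minutes of off-phone time by aux code for one agent's shift (illustrative)
off_phone_minutes = {
    "scheduled_break": 30,
    "coaching": 45,
    "unscheduled_break": 20,
    "uncategorized": 10,
}

planned_min = sum(m for code, m in off_phone_minutes.items() if code in PLANNED)
exception_min = sum(m for code, m in off_phone_minutes.items() if code in EXCEPTION)

print(f"Planned shrinkage: {planned_min} min (feeds the shrinkage model)")
print(f"Adherence exceptions: {exception_min} min (feeds the adherence conversation)")
if off_phone_minutes.get("uncategorized", 0) > 0:
    print("Uncategorized off-phone time present -> adherence and shrinkage are both unreliable")
```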
BPO-specific data errors
BPO operations have additional data error categories because they manage multiple clients, contracts, and billing models.
Error 8: Client hour allocation is wrong
| Detail | Description |
|---|---|
| What goes wrong | A cross-trained agent works 4 hours on Client A and 4 hours on Client B. The timesheet records all 8 hours under Client A because the agent forgot to switch the allocation, or the system defaults to the primary account |
| Bad decision it causes | Client A's cost per call and billable utilization are overstated. Client B's are understated. If the BPO uses this data for pricing decisions or contract renewals, it may underprice Client A (thinking it is cheaper to serve than it is) and overprice Client B |
| Financial cost | If 10 cross-trained agents each misallocate 1 hour per day: 10 hours/day × 250 working days = 2,500 hours/year. At a billing rate of $25/hour, that is $62,500 in misallocated revenue — not lost, but assigned to the wrong client. If this leads to a pricing decision that undercharges Client A by $2/hour on a renewal, the annualized loss on that contract could be significant |
| How to detect | Compare timesheet client allocation to ACD skill group assignment. If the ACD shows the agent handled Client B calls during a period but the timesheet shows Client A, the allocation is wrong. Use ACD-based allocation as the primary method, not manual agent logging |
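A minimal sketch of the allocation reconciliation, treating ACD skill-group time as the primary source and comparing it to the hours the timesheet assigned to each client. Agent IDs, clients, hours, and the half-hour tolerance are illustrative.

```python
# Sketch of the client allocation check for cross-trained agents: compare the
# timesheet's client split to ACD skill-group time and flag mismatches.

timesheet_hours = {  # (agent, client) -> hours the timesheet allocated
    ("A201", "client_A"): 8.0,
    ("A201", "client_B"): 0.0,
}
acd_hours = {        # (agent, client) -> hours worked in that client's skill groups
    ("A201", "client_A"): 4.1,
    ("A201", "client_B"): 3.9,
}

TOLERANCE_HOURS = 0.5

agents = {agent for agent, _ in acd_hours}
clients = {client for _, client in acd_hours}
for agent in agents:
    for client in clients:
        ts = timesheet_hours.get((agent, client), 0.0)
        acd = acd_hours.get((agent, client), 0.0)
        if abs(ts - acd) > TOLERANCE_HOURS:
            print(f"{agent}/{client}: timesheet {ts:.1f} h vs ACD {acd:.1f} h "
                  f"-> reallocate before billing and profitability reporting")
```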
Error 9: SLA measurement uses the wrong denominator
| Detail | Description |
|---|---|
| What goes wrong | The SLA is defined as "80% of calls answered within 20 seconds." The BPO measures this at the daily level. Some days hit 85%, some hit 72%. The daily average is 80%, so the BPO reports the SLA as met. But the contract defines SLA at the interval level (30-minute intervals), and 35% of intervals missed the target — the daily average masks interval-level failures |
| Bad decision it causes | The BPO believes it is meeting the SLA and does not take corrective action. The client, measuring at the interval level per the contract, calculates an SLA miss and applies penalties. The BPO is surprised by the penalty because its own data showed compliance |
| Financial cost | SLA penalties are typically 2–5% of the monthly invoice per missed metric. On a $200,000/month account, a 3% penalty is $6,000/month. If the BPO could have avoided the miss by correctly measuring and reacting at the interval level, the penalty is a direct cost of the measurement error |
| How to detect | Confirm that the SLA measurement methodology (denominator, time interval, exclusions) matches the contract definition exactly. Measure at the most granular level the contract specifies. If the contract says "interval level," do not report at the daily level |
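A small sketch of the difference the denominator makes: the same day measured at the daily level and at the 30-minute interval level the contract specifies. The interval counts are illustrative.

```python
# Sketch of daily-level vs interval-level SLA measurement on the same data.

# (calls_offered, calls_answered_within_20s) per 30-minute interval (illustrative)
intervals = [
    (120, 94),  (150, 118), (200, 150), (220, 160),   # busy intervals miss the 80% target
    (100, 90),  (80, 74),   (60, 57),   (40, 38),     # quiet intervals exceed it
]

TARGET = 0.80

offered = sum(o for o, _ in intervals)
answered = sum(a for _, a in intervals)
daily_sl = answered / offered

missed = sum(1 for o, a in intervals if a / o < TARGET)
print(f"Daily-level service level: {daily_sl:.0%} (looks compliant)")
print(f"Intervals missing {TARGET:.0%}: {missed} of {len(intervals)} "
      f"({missed / len(intervals):.0%}) -> what the contract actually measures")
```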
How data errors compound
Data errors rarely exist in isolation. One wrong input feeds into multiple calculations, and each downstream decision magnifies the original error.
| Starting error | First-order impact | Second-order impact | Third-order impact |
|---|---|---|---|
| Shrinkage 7 points too low | Understaffed by 4 agents per shift | Service level drops from 80/20 to 65/20 | SLA penalty + overtime to recover + agent burnout from sustained overload |
| AHT baseline 20% too low | Forecast under-predicts agent need by 20% | Schedule has 20% fewer agents than needed | Mandatory overtime, attrition increase from overwork, further understaffing |
| Attrition not segmented by tenure | Hiring pipeline sized to blended rate (too small) | Headcount falls further behind each month | Chronic understaffing funded by overtime at 1.5x cost |
| Client hours misallocated | Client profitability reports are wrong | Pricing decisions based on wrong cost data | Contract renewed at a rate that does not cover actual cost |
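A worked sketch of the first row of the table, propagating a 7-point shrinkage error through the schedule into the overtime needed to cover the gap. The staffing and rate figures are illustrative.

```python
# Sketch of how a 7-point shrinkage error compounds: schedule -> coverage gap ->
# overtime cost. All inputs are illustrative.

agents_needed_on_phones = 50       # from the forecast, per interval
planned_shrinkage = 0.25
actual_shrinkage = 0.32            # 7 points higher than planned
hourly_rate = 15.0
overtime_multiplier = 1.5
shift_hours = 8
days_per_week = 5

scheduled = agents_needed_on_phones / (1 - planned_shrinkage)    # what the model books
actually_on_phones = scheduled * (1 - actual_shrinkage)          # what shows up on phones
gap = agents_needed_on_phones - actually_on_phones               # first-order impact

weekly_overtime_cost = gap * shift_hours * hourly_rate * overtime_multiplier * days_per_week

print(f"Scheduled: {scheduled:.0f} agents, on phones: {actually_on_phones:.1f}, gap: {gap:.1f}")
print(f"Overtime to cover the gap: ${weekly_overtime_cost:,.0f}/week, "
      f"${weekly_overtime_cost * 13:,.0f}/quarter")
```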
The cost of delayed correction
Every data error has a correction cost that increases with time.
| When caught | Correction cost | Example |
|---|---|---|
| Same day | Minutes of supervisor time. Zero financial impact | Missed timesheet punch corrected same day — actual time is still fresh in memory |
| End of pay period | Hours of reconciliation. Possible payroll adjustment | Missed punches across 10 agents over 2 weeks — supervisor must investigate each one, memories are fuzzy, some corrections are estimates |
| End of month | Wrong data has already fed into reports. Reports must be re-run. Decisions made on wrong data may need to be reversed | Shrinkage assumption used in staffing model all month — schedule was wrong for 4 weeks, overtime was incurred, and the budget report shows the wrong labor cost |
| End of quarter | Multiple decisions built on wrong data. Re-running the numbers changes historical comparisons. Stakeholders have been acting on wrong information | BPO client profitability report showed Account B was profitable at 12% margin. After correcting hour allocation, actual margin is 4%. The pricing strategy for the renewal was based on the wrong number |
Building a data quality cadence
Data quality is not a one-time project. It is a set of recurring checks built into the operations management cadence.
| Check | Frequency | What to verify | Who |
|---|---|---|---|
| Timesheet exceptions | Daily | Missed punches, clock-in vs. ACD login discrepancies, unscheduled overtime | Supervisor |
| Adherence categorization | Daily | Off-phone time categorized correctly (break, coaching, training, admin) | Supervisor or WFM analyst |
| Forecast accuracy | Weekly | Forecasted volume vs. actual volume, by day and interval. Forecasted AHT vs. actual AHT | WFM analyst |
| Shrinkage actual vs. planned | Monthly | Calculate actual shrinkage from time tracking data. Compare to the assumption in the staffing model | WFM analyst or ops manager |
| QA calibration | Monthly | All evaluators score the same 5 calls. Scores within 5 points of each other | QA manager |
| Attrition segmentation | Monthly | Attrition by tenure band (0–30, 31–90, 91–180, 180+ days). Compare to blended rate used in hiring plan | Ops manager or HR |
| AHT by call type | Monthly | AHT per call type compared to the baseline used in the forecast model | WFM analyst |
| Client hour allocation (BPO) | Every pay period | Timesheet client allocation vs. ACD skill group data for cross-trained agents | Supervisor + BPO ops manager |
| SLA measurement methodology (BPO) | Quarterly | Confirm internal SLA calculation matches contract definition (interval, denominator, exclusions) | BPO ops manager |
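The cadence can also live as a checkable structure rather than a document. The sketch below mirrors the table (the pay-period allocation check is left out because its timing depends on the payroll calendar); the scheduling rules in the helper are illustrative assumptions, not a prescribed calendar.

```python
# Sketch of the data quality cadence as a structure with a helper that lists
# which checks are due on a given date.
from datetime import date

CHECKS = [
    {"name": "Timesheet exceptions",        "frequency": "daily",     "owner": "Supervisor"},
    {"name": "Adherence categorization",    "frequency": "daily",     "owner": "Supervisor / WFM analyst"},
    {"name": "Forecast accuracy",           "frequency": "weekly",    "owner": "WFM analyst"},
    {"name": "Shrinkage actual vs planned", "frequency": "monthly",   "owner": "WFM analyst / ops manager"},
    {"name": "QA calibration",              "frequency": "monthly",   "owner": "QA manager"},
    {"name": "Attrition segmentation",      "frequency": "monthly",   "owner": "Ops manager / HR"},
    {"name": "AHT by call type",            "frequency": "monthly",   "owner": "WFM analyst"},
    {"name": "SLA measurement methodology", "frequency": "quarterly", "owner": "BPO ops manager"},
]

def due_today(today: date) -> list:
    """Return checks due on a date: daily always; weekly on Monday;
    monthly on the 1st; quarterly on the 1st of Jan/Apr/Jul/Oct."""
    due = []
    for check in CHECKS:
        freq = check["frequency"]
        if (freq == "daily"
                or (freq == "weekly" and today.weekday() == 0)
                or (freq == "monthly" and today.day == 1)
                or (freq == "quarterly" and today.day == 1 and today.month in (1, 4, 7, 10))):
            due.append(check)
    return due

for check in due_today(date(2024, 7, 1)):
    print(f"{check['name']} -> {check['owner']}")
```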
What to fix first
Not all data errors are equally costly. Prioritize based on financial impact and how many downstream decisions each input affects.
| Priority | Data input | Why it is high priority |
|---|---|---|
| 1 | Shrinkage assumption | Affects every staffing calculation for every shift. A wrong shrinkage number means every schedule is wrong |
| 2 | AHT baseline in the forecast model | Affects the volume-to-agent conversion. Wrong AHT means the forecast produces the wrong headcount number |
| 3 | Timesheet accuracy | Affects payroll, labor cost, shrinkage calculation, overtime tracking, and compliance. The most connected data point in the operation |
| 4 | Attrition segmentation by tenure | Affects hiring pipeline sizing and training investment decisions. Blended attrition hides where the problem actually is |
| 5 | QA calibration | Affects coaching direction, performance reviews, and agent morale. Uncalibrated QA data sends supervisors after the wrong problems |
| 6 (BPO) | Client hour allocation | Affects billing accuracy, client profitability, and pricing decisions. Wrong allocation means the BPO does not know which accounts are actually making money |