The Satisfaction & Outcome tab shows how well your voice agent is meeting customer needs. Use it to understand satisfaction trends, resolution effectiveness, and where to focus improvements.

Accessing Satisfaction Metrics

  1. Navigate to Dashboard from the main menu
  2. Click the Satisfaction & Outcome tab
  3. Apply filters to analyze specific segments:
    • Agent Filter: Compare satisfaction across agents
    • Date Range: Track trends over time

Key Metrics

CSAT Score

What it measures: Customer Satisfaction Score — the average satisfaction rating from post-call surveys, displayed as a percentage. As a reference, here are typical ranges:
  • > 85%: Excellent
  • 75% - 85%: Good
  • 65% - 75%: Room for improvement
  • < 65%: Investigate
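As a quick sketch, CSAT as defined above can be computed from raw survey ratings. The 1-5 rating scale here is an assumption for illustration, not something the dashboard specifies:

```python
# Sketch: CSAT as the average post-call survey rating, expressed as a
# percentage. Assumes a 1-5 rating scale (hypothetical).
def csat(ratings, scale_max=5):
    """Average rating normalized to a 0-100 percentage."""
    return round(100 * sum(ratings) / (len(ratings) * scale_max), 1)

print(csat([5, 4, 5, 3, 4]))  # 21/25 of the maximum -> 84.0, in the "Good" band
```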

NPS (Net Promoter Score)

What it measures: How likely customers are to recommend your service, on a scale of 0-10. How it’s calculated:
Promoters (9-10) - Detractors (0-6) = NPS (-100 to +100)
Passives (7-8) are not counted.
As a reference, here are typical ranges:
  • > 50: Excellent
  • 30 - 50: Good
  • 0 - 30: Acceptable
  • < 0: More detractors than promoters — investigate
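The formula above can be sketched in a few lines; the sample ratings are hypothetical:

```python
# Sketch of the NPS formula: Promoters (9-10) minus Detractors (0-6),
# as a percentage of all respondents. Passives (7-8) count toward the
# total but toward neither bucket.
def nps(scores):
    """Compute Net Promoter Score (-100 to +100) from 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

ratings = [10, 9, 8, 7, 6, 10, 9, 3, 8, 9]
print(nps(ratings))  # 5 promoters, 2 detractors, 10 total -> 30
```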

First-Call Resolution (FCR)

What it measures: Percentage of calls resolved without escalation, callback, or follow-up.
FCR = (Calls Resolved on First Contact / Total Calls) × 100
As a reference, here are typical ranges:
  • > 80%: Excellent
  • 70% - 80%: Good
  • 60% - 70%: Acceptable
  • < 60%: Investigate
FCR and Escalation Rate are inversely related. High FCR means the agent is handling most queries independently.

Escalation Rate

What it measures: Percentage of calls that required transfer to a human agent.
Escalation Rate = (Escalated Calls / Total Calls) × 100
As a reference, here are typical ranges:
  • < 10%: Excellent — agent handles most queries
  • 10% - 20%: Normal for customer support use cases
  • 20% - 30%: Acceptable for complex domains
  • > 30%: High — agent may need expanded capabilities
The goal isn’t zero escalations. Some queries should go to humans. Focus on reducing unnecessary escalations while ensuring smooth handoffs when needed.
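Both the FCR and Escalation Rate formulas can be sketched from a single call log. The outcome labels here ("resolved", "escalated", "callback") are illustrative assumptions, not the platform's actual tags; note the two rates need not sum to 100% because callbacks and follow-ups fall into neither bucket:

```python
# Hypothetical call log: each call tagged with its outcome.
calls = ["resolved", "resolved", "escalated", "resolved", "callback",
         "resolved", "escalated", "resolved", "resolved", "resolved"]

# FCR = (Calls Resolved on First Contact / Total Calls) x 100
fcr = 100 * calls.count("resolved") / len(calls)

# Escalation Rate = (Escalated Calls / Total Calls) x 100
escalation_rate = 100 * calls.count("escalated") / len(calls)

print(f"FCR: {fcr:.0f}%")                      # 70% -> "Good" band
print(f"Escalation rate: {escalation_rate:.0f}%")  # 20% -> normal for support
```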

Sentiment Distribution

What it measures: The emotional tone of customer conversations, analyzed from transcripts. Displayed as a donut chart with three segments:
  • Positive (score 0.6 - 1.0): Satisfied, grateful
  • Neutral (score 0.4 - 0.6): Matter-of-fact, transactional
  • Negative (score 0.0 - 0.4): Frustrated, upset
A healthy distribution typically looks like:
  • Positive: 60-70%
  • Neutral: 20-30%
  • Negative: 5-15%
Sentiment analysis isn’t perfect. Sarcasm, cultural differences, and context can cause misclassification. Use it as a directional indicator, not an exact measurement.
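The score bands above can be sketched as a small classifier. Treating each boundary value (0.4, 0.6) as belonging to the higher band is an assumption made here for illustration:

```python
from collections import Counter

def sentiment_label(score):
    """Map a 0.0-1.0 sentiment score to the bands described above."""
    if score >= 0.6:
        return "positive"   # satisfied, grateful
    if score >= 0.4:
        return "neutral"    # matter-of-fact, transactional
    return "negative"       # frustrated, upset

# Hypothetical per-call scores -> distribution for the donut chart.
scores = [0.9, 0.7, 0.5, 0.45, 0.2, 0.8, 0.55, 0.1]
dist = Counter(sentiment_label(s) for s in scores)
print(dist)  # e.g. Counter({'positive': 3, 'neutral': 3, 'negative': 2})
```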

Reading the Sentiment Chart

The donut chart uses color-coded segments:
  • Yellow-Orange: Positive sentiment
  • Blue: Neutral sentiment
  • Gray: Negative sentiment
Segment size is proportional to the percentage of calls. Hover over a segment to see its exact percentage.
What to watch for:
  • A growing negative segment over time suggests a declining experience
  • A dominant neutral segment may indicate the agent is functional but not engaging
  • A sudden shift in any direction warrants investigation

Improving Your Metrics

These metrics reflect how well your agent serves customers. Here’s what you can adjust:

Improve Resolution and Reduce Escalations

  • Expand your knowledge base: If the agent can’t answer common questions, add that information
  • Add tool integrations: Give the agent access to systems it needs (order lookup, account info, etc.)
  • Refine escalation triggers: Configure when the agent should hand off vs. attempt to resolve
  • Update conversation flows: If customers frequently get stuck at specific points, improve the flow

Improve Satisfaction and Sentiment

  • Improve prompt instructions: Guide the agent’s tone, empathy, and thoroughness
  • Keep responses concise: Long-winded answers frustrate callers
  • Choose the right model: More capable models (GPT-4.1, Gemini 2.5 Pro) handle nuanced conversations better
  • Set clear expectations: If the agent can’t do something, it should say so early rather than frustrating the caller

Compare Agents to Find What Works

Use the agent filter to view metrics for individual agents. If one agent performs significantly better than another:
  1. Review its configuration (prompt, model, knowledge base)
  2. Identify what’s different from lower-performing agents
  3. Apply those patterns where appropriate
The agent filter is single-select — switch between agents to compare their metrics. When comparing, account for differences in complexity. A simple FAQ agent will naturally have higher FCR and CSAT than one handling complex support queries.

By Date Range

Use date range comparisons to spot trends:
  • Today vs Yesterday: Detect sudden changes
  • This Week vs Last Week: Identify trends
  • Custom range: Investigate specific incidents or measure the impact of changes you’ve made
A declining trend across metrics often points to a specific change — a new prompt, a model switch, or a knowledge base update that introduced issues.

By Agent

Filter by specific agents to understand per-agent performance. This is particularly useful after making configuration changes to verify they had the intended effect.

Troubleshooting

Metrics Not Loading

  1. Expand the date range — there may not be enough data in the selected period
  2. Select “All agents” to check if data exists for any agent
  3. Verify calls have been made in the selected period
  4. If the issue persists, contact support

Conflicting Metrics

Sometimes metrics seem contradictory (e.g., high FCR but low CSAT). Common explanations:
  • High FCR, Low CSAT: The agent may be marking calls as resolved without truly satisfying the customer. Review the agent’s conversation flow.
  • Low Escalation, High Negative Sentiment: The agent may not be escalating when it should. Consider adjusting escalation triggers for frustrated callers.
  • Good Sentiment, Low FCR: The agent may be pleasant but unable to solve problems. Expand its knowledge base or tool access.

API Reference

Get Satisfaction Analytics

Retrieve satisfaction scores, sentiment distribution, and outcome metrics programmatically.
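As a hedged sketch of what a programmatic request might look like, the snippet below builds a query URL from the same filters the dashboard exposes (agent and date range). The base URL, endpoint path, and parameter names are assumptions for illustration, not the documented API contract:

```python
from urllib.parse import urlencode

def satisfaction_analytics_url(base, agent_id=None, start=None, end=None):
    """Build a satisfaction-analytics query URL (hypothetical endpoint
    and parameter names; omitted filters are simply left out)."""
    params = {k: v for k, v in
              {"agent_id": agent_id, "start_date": start, "end_date": end}.items()
              if v is not None}
    query = f"?{urlencode(params)}" if params else ""
    return f"{base}/analytics/satisfaction{query}"

print(satisfaction_analytics_url("https://api.example.com/v1",
                                 agent_id="agent_123",
                                 start="2025-01-01", end="2025-01-31"))
```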