You can test your agent before deploying it to production. Callab provides two testing methods.

Testing Methods

[Screenshot: agent edit screen showing the Test Call and Test Web Call buttons]
You can access testing from:
  • The agent edit screen (top right)
  • The Test Web Call button (recommended)
  • The Test Call button (phone call)

Method 1: Web Call Testing

Web call testing is the fastest way to test your agent. No phone number is required.

How to Test via Web Call

To start a web call test:
  1. Click the “Test Web Call” button (top right of the agent screen)
  2. The web call modal opens
[Screenshot: agent edit screen - test web call modal]
  3. Click the “Start Call” button
  4. Allow microphone access in your browser (a quick check is sketched after this list)
  5. Speak with the agent
  6. Test your conversation flows
  7. Click “End Call” when done
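
If the browser does not prompt for microphone access, you can verify the microphone yourself before starting the test. The snippet below is a minimal sketch using the standard browser MediaDevices API; it is not part of Callab, just a generic check you can paste into the browser console (drop the type annotation if you run it as plain JavaScript).

// Quick microphone check to run before a web call test.
// Uses the standard MediaDevices API; nothing here is specific to Callab.
async function checkMicrophone(): Promise<void> {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    console.log("Microphone OK:", stream.getAudioTracks()[0].label);
    stream.getTracks().forEach((track) => track.stop()); // release the device
  } catch (err) {
    console.error("Microphone unavailable or permission denied:", err);
  }
}

checkMicrophone();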

Web Call Benefits

Web call testing gives you:
  • Instant testing (no dialing required)
  • Browser-based (works on any device)
  • Free testing (no phone costs)
  • Quick iterations (test → edit → test)
  • Same experience as phone call
  • Full feature testing (tools, transfers, etc.)

Web Call Best Practices

Follow these practices:
  • Test in a quiet environment first
  • Use a good-quality microphone
  • Speak clearly and naturally
  • Test different scenarios
  • Test edge cases and errors
  • Test all conversation paths
Also test these side scenarios:
  • Test background noise handling
  • Test interruptions
  • Test unclear speech
  • Test silence handling
  • Test multiple languages (if multilingual)

Method 2: Phone Call Testing

You can also test your agent with an actual phone call.
[Screenshot: agent edit screen - test call over phone]

How to Test via Phone Call

To test over the phone:
  1. Click the “Test Call” button
  2. A modal shows the phone number to call
  3. Enter your phone number (optional: the agent calls you)
  4. Call the displayed number
  5. Test agent conversation
  6. Hang up when done

Phone Call Benefits

Phone call testing gives you:
  • Real phone network testing
  • Actual call quality testing
  • Network latency testing
  • Caller ID testing
  • Phone-specific features testing

Phone Call Use Cases

Test via phone when:
  • Testing phone number integrations
  • Testing call quality on real network
  • Testing with team members remotely
  • Demonstrating to stakeholders
  • Testing caller ID display
  • Testing international calls

What to Test

Core Functionality

Test the following:
Identity & Personality:
  • Agent introduces itself correctly
  • Tone matches configuration
  • Personality is consistent
  • Brand voice is maintained
Task Execution:
  • Agent completes primary tasks
  • Information collection works
  • Required data is gathered
  • Workflows follow correct steps
Guardrails:
  • Agent refuses prohibited topics
  • Escalation triggers work
  • Boundaries are respected
  • Compliance rules followed

Tool Testing

Test each enabled tool:
End Call:
  • Triggers on “goodbye”
  • Asks if anything else needed
  • Ends call properly
Transfer Call:
  • Transfers when requested
  • Explains transfer reason
  • Connects to correct destination
Send Email:
  • Collects email address
  • Confirms before sending
  • Email actually sent
Cal.com Booking:
  • Checks availability
  • Presents options
  • Confirms booking
  • Sends confirmation
Webhooks (see the local receiver sketch after this list):
  • Data sent correctly
  • Handles responses
  • Errors handled gracefully
Knowledge Base:
  • Retrieves correct information
  • Presents naturally
  • Handles missing info
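
To confirm that webhook tool calls actually reach your endpoint during a test, you can temporarily point the webhook at a local receiver and inspect the payloads as they arrive. The sketch below is a generic Node.js/TypeScript receiver, not a Callab component; the http://localhost:3000/webhook URL is an assumption, and you may need a tunnel (for example ngrok) if the platform requires a public URL.

// Minimal local webhook receiver for inspecting what the agent sends during a test.
// Assumption: the agent's webhook is pointed at http://localhost:3000/webhook.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhook") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      console.log("Webhook received at", new Date().toISOString());
      console.log(body); // raw payload sent by the agent
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log("Listening on http://localhost:3000/webhook"));

Start the receiver, run a test call that triggers the webhook tool, and compare the logged payload against the data you configured the agent to send.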

Conversation Paths

Test these conversation paths (a simple scenario-tracker sketch follows this list):
Happy Path:
  • User knows what they want
  • All information provided clearly
  • Task completes successfully
Unclear Input:
  • User mumbles or speaks unclearly
  • Background noise present
  • Multiple intents in one sentence
Edge Cases:
  • User provides wrong information
  • User changes mind mid-conversation
  • User asks off-topic questions
  • User remains silent
  • User interrupts frequently
Error Handling:
  • Tools fail or timeout
  • Missing required information
  • System errors occur
  • User gets frustrated
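
One lightweight way to keep these paths organized across test runs is a simple scenario list maintained outside the product. The TypeScript sketch below is purely illustrative; the types, field names, and example scenarios are assumptions, not a Callab feature.

// Illustrative tracker for manual test scenarios; not part of Callab.
type Outcome = "pass" | "fail" | "not-run";

interface TestScenario {
  path: "happy" | "unclear-input" | "edge-case" | "error-handling";
  description: string;
  expected: string;
  outcome: Outcome;
  notes?: string;
}

const scenarios: TestScenario[] = [
  { path: "happy", description: "Caller provides all details clearly", expected: "Task completes successfully", outcome: "not-run" },
  { path: "edge-case", description: "Caller changes their mind mid-conversation", expected: "Agent adjusts without losing context", outcome: "not-run" },
  { path: "error-handling", description: "A tool fails or times out", expected: "Agent apologizes and offers an alternative", outcome: "not-run" },
];

// Summarize results after a test session.
const failures = scenarios.filter((s) => s.outcome === "fail");
console.log(`${failures.length} of ${scenarios.length} scenarios failing`);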

Voice & Language

Test the following:
Voice Quality:
  • Pronunciation is clear
  • Pace is appropriate
  • Tone sounds natural
  • No robotic artifacts
Language Accuracy:
  • Correct language detected
  • Accents understood
  • Regional variations handled
  • Multilingual if configured

Testing Checklist

Use this checklist:
Pre-Test:
  • Agent configuration saved
  • All tools configured
  • Identity, Tasks, Guardrails set
  • Voice and language selected
During Test:
  • Agent greets appropriately
  • Understands user intent
  • Follows conversation flow
  • Uses tools correctly
  • Handles errors gracefully
  • Maintains personality
  • Respects guardrails
Post-Test:
  • Review call transcript
  • Check tool execution logs
  • Verify data collected
  • Note improvements needed
  • Test again after changes

Common Testing Mistakes

Avoid:
  • Testing only happy path
  • Not testing all tools
  • Skipping edge cases
  • Testing in perfect conditions only
  • Not documenting issues
  • Testing alone (get diverse testers)
Instead:
  • Test multiple scenarios
  • Test all conversation paths
  • Test error conditions
  • Test with different voices/accents
  • Document all findings
  • Iterate based on results

After Testing

Once a test is complete:
  1. Review test results
  2. Identify improvement areas
  3. Update agent configuration
  4. Re-test changes
  5. Run simulation tests for comprehensive testing
  6. Deploy to production when satisfied
For comprehensive testing with multiple scenarios, use Simulation Tests to run automated test suites.

Testing Workflow

Initial Testing:
  1. Configure agent
  2. Run web call test
  3. Test basic conversation
  4. Fix obvious issues
  5. Test again
Comprehensive Testing:
  1. Test all features
  2. Test all tools
  3. Test edge cases
  4. Get feedback from team
  5. Run simulation tests
Pre-Production:
  1. Final web call test
  2. Phone call test
  3. Multi-user testing
  4. Stakeholder demo
  5. Production deployment

Next Steps

Test early and test often. Small issues are easier to fix than large ones discovered in production.