Why A/B Test?
Test different variations to find what converts best:
- Which greeting gets more engagement?
- What personality drives more leads?
- Which questions qualify leads better?
- What colors increase clicks?
Setting Up A/B Tests
Create Test Variations
Method 1: Duplicate & Modify
1. Duplicate your existing chatbot
2. Name it clearly (e.g., “Homepage Chat – Variation B”)
3. Make ONE significant change:
   - Different greeting
   - New personality style
   - Alternative questions
   - Different colors
4. Keep everything else the same
Method 2: Built-in A/B Testing (Coming Soon)
Auto-split traffic and track results automatically.
What to Test
Test Ideas
1. Greeting Message
Variation A:
"Hi! How can I help you today?"
Variation B:
"👋 Welcome! Looking for something specific?"
What to measure:
- Open rate
- Engagement rate
- Lead conversion
2. Personality Type
Variation A: Professional, formal tone
Variation B: Friendly, casual tone
What to measure:
- Conversation length
- Satisfaction (if using feedback)
- Lead quality
3. Lead Capture Timing
Variation A: Ask for email immediately
Variation B: Build rapport first, ask later
What to measure:
- Lead capture rate
- Conversation abandonment
- Lead quality
4. Widget Colors
Variation A: Blue theme (trust)
Variation B: Green theme (growth)
What to measure:
- Widget open rate
- Engagement duration
- Overall conversions
5. Quick Prompts
Variation A:
- “Learn About Our Services”
- “View Pricing”
- “Contact Sales”
Variation B:
- “How Can We Help You Grow?” 🚀
- “See Our Plans” 💰
- “Let’s Chat” 💬
What to measure:
- Prompt click rate
- Conversation starts
- Lead capture
Running the Test
Split Traffic
Manual methods:
Option 1: Different Pages
- Variation A on homepage
- Variation B on pricing page
- Compare results by page
Option 2: Time-Based
- Variation A: Week 1-2
- Variation B: Week 3-4
- Compare same metrics
Option 3: Alternating Days
- Variation A: Mon, Wed, Fri
- Variation B: Tue, Thu, Sat
- Account for day-of-week patterns
Automatic split testing (Coming Soon):
- 50/50 traffic split
- Real-time results
- Statistical significance tracking
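Until built-in splitting ships, you can approximate a deterministic 50/50 split yourself. Here is a minimal Python sketch (a hypothetical helper, not part of Funneler) that buckets each visitor by hashing a stable visitor ID, so the same person always sees the same variation and day-of-week bias disappears:

```python
import hashlib

def assign_variation(visitor_id: str, variations=("A", "B")) -> str:
    """Hash a stable visitor ID so each visitor always lands in the
    same bucket, with a roughly even split across all visitors."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# The same visitor always gets the same variation:
assert assign_variation("visitor-123") == assign_variation("visitor-123")
```

Because the assignment is derived from the ID rather than the date or page, this avoids the drawbacks of the time-based and alternating-day options above.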
Tracking Results
Key Metrics to Track
Conversion Metrics:
- Lead Capture Rate: % who provide contact info
- Qualified Leads: % meeting your criteria
- Appointments Booked: % who schedule calls
- Goal Completions: Custom goals you set
Quality Metrics:
- Conversation Quality: Manual review rating
- Lead Score: Average qualification score
- Follow-up Success: % leading to sales
- Customer Feedback: Thumbs up/down
Analyzing Results
View Analytics
- Go to the “All Chat Funnels” page
- Compare metrics across variations
Dashboard shows:
- Total conversations
- Lead capture rate
- Engagement stats
- Traffic sources
- Device breakdown
Compare Variations
Create a comparison sheet:
| Metric | Variation A | Variation B | Winner |
|---|---|---|---|
| Conversations Started | 120 | 145 | B (+21%) |
| Leads Captured | 45 | 52 | B (+16%) |
| Qualified Leads | 20 | 28 | B (+40%) |
| Avg. Lead Score | 65 | 72 | B (+11%) |
Variation B wins! 🎉
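The “Winner” column is just the relative lift of variation B over variation A. A quick Python check of those percentages, using the numbers from the table above:

```python
def uplift_pct(a: float, b: float) -> int:
    """Relative improvement of variation B over A, as a whole percent."""
    return round((b - a) / a * 100)

rows = [
    ("Conversations Started", 120, 145),
    ("Leads Captured", 45, 52),
    ("Qualified Leads", 20, 28),
    ("Avg. Lead Score", 65, 72),
]
for metric, a, b in rows:
    print(f"{metric}: B {uplift_pct(a, b):+d}%")
# Prints +21%, +16%, +40%, +11% — matching the table
```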
Statistical Significance
Ensure valid results:
- Run the test for at least 2 weeks
- Collect a minimum of 100 conversations per variation
- Check that results stay consistent over the test period
- Account for external factors (holidays, campaigns)
Confidence levels:
- 95%+ confidence: Strong winner
- 90-95%: Likely winner
- Below 90%: Inconclusive, run longer
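To turn raw counts into one of these confidence levels, you can run a standard two-proportion z-test. A minimal Python sketch (the conversion counts below are made up for illustration):

```python
import math

def ab_confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided confidence that A and B truly differ,
    computed with a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - math.erfc(abs(z) / math.sqrt(2))  # 1 minus the two-sided p-value

# Example: 45 leads from 1,000 visitors vs. 62 leads from 1,000 visitors
conf = ab_confidence(45, 1000, 62, 1000)
print(f"{conf:.0%}")  # ~91%: a "likely winner" — worth running longer
```

This is the same calculation the free online significance calculators perform.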
Implementing Winners
Roll Out Winning Variation
1. Verify Results:
   - Double-check data
   - Review conversations
   - Confirm lead quality
2. Update Main Funnel:
   - Apply winning changes to original
   - Or swap activation codes
   - Archive losing variation
3. Monitor Performance:
   - Watch metrics after change
   - Ensure improvement holds
   - Continue optimizing
4. Document Learnings:
   - Note what worked
   - Understand why it won
   - Apply insights to other funnels
Advanced Testing Strategies
Multivariate Testing
Test multiple elements at once:
Example:
- 2 greetings × 2 colors × 2 CTA styles = 8 variations
- Requires more traffic
- Identifies optimal combination
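The combination count comes from multiplying the options per element, and Python's itertools.product enumerates every variation directly (the sample copy below is illustrative):

```python
from itertools import product

# Two options per element, taken from the test ideas above
greetings = ["Hi! How can I help you today?", "👋 Welcome! Looking for something specific?"]
colors = ["blue", "green"]
ctas = ["View Pricing", "See Our Plans"]

# Every combination of greeting, color, and CTA
variations = list(product(greetings, colors, ctas))
print(len(variations))  # 2 × 2 × 2 = 8 variations to build and track
```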
Tools needed:
- Tag manager
- Analytics platform
- Statistical significance calculator
Sequential Testing
Continuous optimization:
1. Test greeting (winner: B)
2. Test colors with B (winner: blue)
3. Test questions with B + blue (winner: early ask)
4. Test CTAs with B + blue + early ask
5. Repeat indefinitely
Result: Compounding improvements over time
Audience Segmentation
Test for specific audiences:
- New vs Returning Visitors
- Mobile vs Desktop
- Traffic Source (Google, Facebook, Direct)
- Geographic Location
Different audiences may prefer different approaches.
Common Testing Mistakes
❌ Don’t:
- Change Too Many Things at Once
- Can’t identify what caused improvement
- Test ONE variable at a time
- End Tests Too Early
- Need statistical significance
- Minimum 100 conversations per variation
- Ignore Context
- Holidays affect behavior
- Marketing campaigns skew results
- Account for external factors
- Test Without a Clear Hypothesis
- A clear hypothesis: “I think a friendly tone will increase leads by 20%”
- Not: “Let’s try different tones and see what happens”
- Forget About Lead Quality
- More leads ≠ better (if they’re unqualified)
- Track lead score and sales outcomes
✅ Do:
- Start Simple
- Test obvious changes first
- Build complexity gradually
- Document Everything
- Record what you tested
- Note the results
- Build knowledge base
- Be Patient
- Good data takes time
- Don’t rush decisions
- Trust the process
- Learn from Losses
- Failed tests teach you
- Understand why things didn’t work
- Apply learnings to future tests
Testing Tools & Resources
Built-in Analytics
Funneler provides:
- Conversation metrics
- Lead capture tracking
- Traffic sources
- Device breakdown
External Tools
Recommended additions:
Google Analytics:
- Track funnel goals
- User flow visualization
- Deeper insights
Hotjar/FullStory:
- Session recordings
- See actual visitor behavior
- Heatmaps
Spreadsheets:
- Track results over time
- Compare variations
- Calculate significance
Testing Calculators
A/B Test Significance Calculator:
- Free online tools
- Enter your numbers
- Get confidence level
- Make data-driven decisions
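Those calculators also tell you how long to run a test. A rough Python approximation of the standard sample-size formula at 95% confidence and 80% power (the baseline rate and lift below are illustrative):

```python
import math

def visitors_per_variation(p_base: float, lift: float,
                           z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variation to detect an absolute
    `lift` over baseline rate `p_base` at 95% confidence and 80% power."""
    p_avg = p_base + lift / 2  # average rate across the two variations
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * p_avg * (1 - p_avg) / lift ** 2)

# Example: 5% baseline lead-capture rate, detecting a 2-point absolute lift
print(visitors_per_variation(0.05, 0.02))  # roughly 2,200 visitors per variation
```

Smaller expected lifts require dramatically more traffic, which is why the guidance above says not to end tests early.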
Next Steps After Testing
Continuous Improvement
Monthly optimization cycle:
Week 1: Plan new test based on data
Week 2-3: Run test, collect data
Week 4: Analyze results, implement winner
Repeat monthly for ongoing improvement.
Share Learnings
Document wins:
- What you tested
- What you learned
- Impact on metrics
- Screenshots/examples
Apply to other funnels:
- Use winning formulas
- Adapt to different contexts
- Scale what works