# Analytics & More
Track test performance, pass rates, scheduling, and notifications.
The analytics dashboard gives you visibility into your testing health across all repositories.
## Dashboard Stats
The main testing dashboard shows four key metrics:
| Metric | Description |
|---|---|
| Total Tests | Number of tests across all repositories |
| Pass Rate | Percentage of tests passing |
| Hours Saved | Estimated manual testing time saved |
| Flaky Tests | Tests that pass sometimes, fail sometimes |
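The pass-rate metric above is a simple ratio of passing results to total results. A minimal sketch, assuming each result is recorded as `"pass"` or `"fail"` (the data shape is illustrative, not the product's data model):

```python
def pass_rate(results):
    """Percentage of passing tests, rounded to one decimal place."""
    if not results:
        return 0.0
    passing = sum(1 for r in results if r == "pass")
    return round(100 * passing / len(results), 1)

print(pass_rate(["pass", "pass", "fail", "pass"]))  # 75.0
```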
## Repository Health
The health visualization shows each repository as a bar:
```
my-frontend  ████████████░░░░  75% passing
my-backend   ████████████████ 100% passing
my-api       ████░░░░░░░░░░░░  25% passing
```
- Green = Passing tests
- Red = Failing tests
- Gray = Tests not run
Click any repository to drill down into its test details.
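The bars above are straightforward to reproduce from pass counts. A sketch, assuming a 16-character bar like the dashboard shows (the function and its signature are invented for illustration):

```python
def health_bar(passing, total, width=16):
    """Render a fixed-width health bar like the dashboard's repository view."""
    filled = round(width * passing / total) if total else 0
    pct = round(100 * passing / total) if total else 0
    return "█" * filled + "░" * (width - filled) + f" {pct}% passing"

print(health_bar(3, 4))  # ████████████░░░░ 75% passing
```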
## Pass Rate Trends
Track how your pass rate changes over time:
- Daily pass rate percentage
- Week-over-week comparison
- Identify when regressions were introduced
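The week-over-week comparison reduces to averaging two seven-day windows of the daily series. A hedged sketch, assuming the input is a chronological list of daily pass-rate percentages:

```python
def week_over_week(daily_rates):
    """Mean pass rate of the last 7 days minus the mean of the 7 days before.

    A negative result means the pass rate dropped week over week.
    """
    this_week = daily_rates[-7:]
    last_week = daily_rates[-14:-7]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return round(avg(this_week) - avg(last_week), 1)
```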
## Flaky Test Detection
A test is marked flaky when it:
- Passes on some runs, fails on others
- Has inconsistent results without code changes
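One way to express that rule in code: a test is flaky if the same commit produced both passing and failing runs. A sketch that assumes run records carry the commit they ran against (the tuple shape is invented for illustration):

```python
def is_flaky(runs):
    """Flag a test whose runs mix passes and failures at the same commit.

    runs: list of (commit_sha, outcome) tuples; outcome is "pass" or "fail".
    """
    by_commit = {}
    for sha, outcome in runs:
        by_commit.setdefault(sha, set()).add(outcome)
    # Mixed outcomes at one commit mean no code change explains the flip.
    return any(outcomes == {"pass", "fail"} for outcomes in by_commit.values())
```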
### Why Flaky Tests Matter
Flaky tests:
- Waste time investigating false failures
- Reduce trust in your test suite
- Slow down development
### Fixing Flaky Tests
| Cause | Solution |
|---|---|
| Race conditions | Add explicit waits |
| Shared state | Isolate test data |
| Network timing | Mock external APIs |
| Animation timing | Wait for animations to complete |
| Random data | Use deterministic test data |
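For the last row of the table, "deterministic test data" usually means seeding any randomness so every run sees the same values. A minimal Python sketch; the user-factory shape is invented for illustration:

```python
import random

def make_test_user(seed=42):
    """Generate the same 'random' test user on every run by fixing the seed."""
    rng = random.Random(seed)  # isolated, seeded generator; does not touch global state
    uid = rng.randint(1000, 9999)
    return {"id": uid, "name": f"user_{uid}"}

assert make_test_user() == make_test_user()  # stable across runs
```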
## Test Duration
See which tests are slow:
- Average duration per test
- Duration trends over time
- Identify tests that have gotten slower
Slow tests increase feedback time and CI costs.
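Picking out the offenders from raw timings is straightforward. A sketch, assuming a mapping of test name to recorded durations in seconds (names and the 30-second threshold are illustrative):

```python
def slowest(durations, threshold=30.0):
    """Return (name, average) pairs over the threshold, slowest first."""
    avgs = {name: sum(ts) / len(ts) for name, ts in durations.items()}
    slow = [(name, avg) for name, avg in avgs.items() if avg > threshold]
    return sorted(slow, key=lambda pair: -pair[1])
```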
## Run History
View aggregated statistics:
- Total runs this week/month
- Runs by trigger type (Manual, Scheduled, PR, Push)
- Runs by platform (Chrome, Firefox, Safari)
## Filtering Analytics
Filter analytics by:
- Repository - Focus on one repo
- Time range - Last 7 days, 30 days, 90 days
- Test type - Unit, Integration, E2E, Performance
## Exporting Data
Export analytics for reporting:
1. Go to Testing > Analytics
2. Set your filters
3. Click Export
4. Download as CSV
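The exported file is plain CSV, so the same rows are easy to produce or post-process yourself. A sketch using only the standard library; the column names are invented, not the product's export schema:

```python
import csv
import io

def export_csv(rows):
    """Serialize analytics rows to a CSV string with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["repo", "pass_rate", "runs"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```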
## Notifications
Get notified when tests complete.
### Enable Notifications
Go to Settings > Alerts and configure:
| Notification | When It Fires |
|---|---|
| On Failure | Any test fails |
| On Flaky | A test is marked flaky |
| On Success | All tests pass (optional) |
| Suite Complete | All scheduled tests finish |
### Notification Channels
| Channel | Description |
|---|---|
| Email | Send alerts directly to any email address |
| Slack | Post to a Slack channel via incoming webhook |
| Discord | Post to a Discord channel via webhook |
| Microsoft Teams | Post to a Teams channel via connector webhook |
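The three webhook channels all accept an HTTP POST with a JSON body. A sketch of the Slack case using only the standard library; the `{"text": ...}` payload shape is what Slack incoming webhooks document, while the function names here are ours:

```python
import json
import urllib.request

def build_payload(message):
    """Build the JSON body a Slack incoming webhook expects."""
    return json.dumps({"text": message}).encode("utf-8")

def notify_slack(webhook_url, message):
    """POST an alert message to a Slack incoming webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Discord and Teams webhooks follow the same POST-a-JSON-body pattern, though each expects its own payload fields.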
### Setting Up Integrations
1. Go to Settings > Alerts
2. Click Add Integration and select your platform
3. Paste your webhook URL (or enter an email address)
4. Click Test to verify the connection
You can add multiple integrations. For example, send critical failures to Slack and a summary email to the team lead.
## Best Practices
- Review flaky tests weekly. Even a few flaky tests can erode trust in your entire suite.
- Watch for pass-rate drops after deployments; they often signal regressions.
- Keep E2E tests under 30 seconds when possible. Long tests are more likely to be flaky.
- Schedule tests during off-peak hours to avoid impacting staging environments.
- Don't schedule tests against production if they create test data or modify state.