ATA Robustness Testing with Monitoring — Product Documentation
1) What This Feature Is
Robustness Testing in ATA simulates edge cases, invalid inputs, and unexpected conditions to check how APIs behave. By integrating Monitoring, you can now automatically run these robustness tests on a schedule, ensuring your APIs remain resilient over time.
Instead of running robustness tests only once during QA, ATA monitoring allows you to:
- Detect regressions early.
- Continuously validate error handling.
- Spot data validation gaps introduced after new deployments.
2) How It Works
- Robustness Test Suite: Define APIs and generate robustness test cases (invalid inputs, wrong data types, edge values).
- Attach Monitoring: Create a monitor based on that robustness suite.
- Scheduling & Configurations: Same as regular ATA monitors (frequency, retries, notifications).
- Automated Runs: Robustness test cases run at defined intervals.
- Results & Health: Track pass/fail, logs, and long‑term trends.
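The cycle above can be sketched in plain Python. This is illustrative only, not ATA internals: a stub validator stands in for the API under test, and the function names are invented for the example.

```python
# A minimal sketch of one monitor run: each robustness case pairs an
# edge-case input with the status the API is expected to return.

def stub_api(user_id):
    """Stands in for the API under test: accepts only positive integers."""
    if isinstance(user_id, int) and user_id > 0:
        return 200
    return 400  # invalid input correctly rejected

def run_robustness_suite(cases):
    """Run every case and record whether the actual status matched."""
    results = []
    for value, expected in cases:
        actual = stub_api(value)
        results.append({"input": value, "expected": expected,
                        "actual": actual, "passed": actual == expected})
    return results

suite = [("abc", 400), (-1, 400), (0, 400), (42, 200)]
results = run_robustness_suite(suite)
print(all(r["passed"] for r in results))  # → True
```

A monitor simply repeats this run on a schedule and raises an alert when any case stops passing.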
3) Creating a Robustness Monitor
Step 1: Create a Robustness Test Suite
- Navigate to API Testing Lab → Robustness Testing.
- Click + New Suite and name it (e.g., User Input Edge Cases).
- Add AI instructions (e.g., “Test user_id with strings, negative numbers, and special chars”).
- Import APIs (Swagger, Postman, or ATA Test Suite).
- Save and generate robustness test cases.
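To make the AI instruction above concrete, here is a hand-rolled sketch of the kind of cases it would yield. ATA generates these for you; the generator function and values below are purely illustrative.

```python
# Mirrors the instruction "Test user_id with strings, negative numbers,
# and special chars". Every generated case expects rejection with 400.

def generate_user_id_cases():
    strings = ["abc", "", "12.5"]
    negatives = [-1, -999]
    special = ["%$#", "'; DROP TABLE users;--", "\u0000"]
    return [(value, 400) for value in strings + negatives + special]

cases = generate_user_id_cases()
print(len(cases))  # → 8
```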
Step 2: Create a Monitor for the Suite
- In the Robustness Suite Dashboard, click Create Monitor.
- Configure:
- Monitor Name (e.g., Prod Robustness Health).
- Suite: Choose the robustness suite.
- Environment: Select environment (Dev, QA, Staging, or Production).
Step 3: Configure Schedule
- Frequency: Every 5 min, hourly, daily, or custom.
- Start Time: Exact time for the first run.
- Days: All days, weekdays, or business hours.
- Timezone: For consistency across regions.
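The schedule settings above can be pictured as a single configuration object. The field names here are illustrative, not an ATA file format; you configure the real values in the UI.

```python
# A hypothetical monitor schedule expressed as a plain dict.
schedule = {
    "frequency": "hourly",               # every 5 min, hourly, daily, or custom
    "start_time": "2024-01-01T09:00:00", # exact time for the first run
    "days": "weekdays",                  # all days, weekdays, or business hours
    "timezone": "UTC",                   # keeps runs consistent across regions
}
print(schedule["frequency"])  # → hourly
```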
Step 4: Configure Retry on Failure
- Enable Retry: Toggle on.
- Retry Count: 1–3.
- Retry Delay: Short delay (e.g., 30 seconds).
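Retry-on-failure behaves roughly like the sketch below, assuming a `run_suite()` callable that returns True on success (names are illustrative, not ATA internals): one transient failure is absorbed instead of raising an alert.

```python
import time

def run_with_retries(run_suite, retry_count=2, retry_delay=0.0):
    """Run once, then retry up to retry_count times before reporting failure."""
    for attempt in range(1 + retry_count):
        if run_suite():
            return True
        if attempt < retry_count:
            time.sleep(retry_delay)  # e.g., 30 seconds in a real monitor
    return False

# A flaky run that fails once, then succeeds: one retry absorbs the blip.
attempts = iter([False, True])
print(run_with_retries(lambda: next(attempts), retry_count=2))  # → True
```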
Step 5: Set Notifications
- Email Alerts: To team inbox.
- Webhooks: Slack, Teams, custom alerting.
- Failure Criteria: Trigger on invalid responses, assertion failures, or missing validations.
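For the webhook route, a failure alert boils down to posting a small JSON payload. The sketch below builds a Slack-style `{"text": ...}` payload; the monitor name and failure message are illustrative examples, not real output.

```python
import json

def build_alert(monitor_name, failed_cases):
    """Assemble a Slack-style incoming-webhook payload for failed cases."""
    lines = [f"Robustness monitor '{monitor_name}' failed:"]
    lines += [f"- {case}" for case in failed_cases]
    return json.dumps({"text": "\n".join(lines)})

payload = build_alert("Prod Robustness Health",
                      ["user_id=abc did not return 400 Bad Request"])
```

The resulting string is what you would POST to the webhook URL configured in Slack or Teams.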
Save the monitor → it now executes robustness tests automatically.
4) Viewing Results
- Run History: Timeline of monitor runs with pass/fail results.
- Detailed Logs: For each request and validation failure (e.g., “user_id=abc did not return 400 Bad Request”).
- Visual Graphs:
- Pass vs Fail Trend: Stability of robustness tests over time.
- Failure Types: Breakdown of failures by category (e.g., data type mismatches vs. invalid values).
- Response Times: Spot degraded performance under invalid load.
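The pass-vs-fail trend graph is, at its core, a pass rate computed per run over the run history. A minimal sketch with illustrative data:

```python
def pass_rate_trend(runs):
    """runs: list of (passed, total) per monitor run → pass rate per run."""
    return [round(passed / total, 2) for passed, total in runs]

history = [(10, 10), (9, 10), (10, 10), (7, 10)]
print(pass_rate_trend(history))  # → [1.0, 0.9, 1.0, 0.7]
```

A dip like the final 0.7 is exactly the kind of regression the trend view surfaces.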
5) Tracking Robustness Health
Monitoring extends robustness testing into continuous validation:
- Failure Patterns: Detect repeated mishandling of edge cases.
- Data Validation Checks: Ensure new changes don’t weaken validation.
- Error Stability: Confirm APIs consistently return the right error responses.
- Regression Alerts: Triggered immediately if new deployments allow invalid inputs.
6) Example Workflows
Example 1: Invalid Data Inputs
- Suite includes tests for negative page numbers and invalid dates.
- Monitor runs daily.
- Alerts if API stops rejecting these invalid inputs.
Example 2: Data Type Validation
- Suite checks integer fields with strings and floats.
- Monitor runs every hour.
- Detects if validation rules are bypassed after a schema change.
Example 3: Robustness Regression Guard
- Suite tests for special characters in title and nonexistent IDs.
- Monitor runs every 5 minutes in production.
- Alerts if APIs start accepting or crashing on invalid values.
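The three workflows above can be condensed into one sketch: a stub validator stands in for the API (its rules and field names are invented for illustration), and each case encodes the rejection the suite expects.

```python
import re

def stub_validate(params):
    """Accepts only a positive int 'page', an ISO-like date, and a safe title."""
    if not (isinstance(params.get("page"), int) and params["page"] > 0):
        return 400
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", params.get("date", "")):
        return 400
    if re.search(r"[<>;]", params.get("title", "")):
        return 400
    return 200

regression_cases = [
    ({"page": -1, "date": "2024-01-01", "title": "ok"}, 400),        # Example 1
    ({"page": "3", "date": "2024-01-01", "title": "ok"}, 400),       # Example 2
    ({"page": 1, "date": "2024-01-01", "title": "<script>"}, 400),   # Example 3
    ({"page": 1, "date": "2024-01-01", "title": "ok"}, 200),
]
print(all(stub_validate(p) == exp for p, exp in regression_cases))  # → True
```

If a deployment loosens any of these checks, the corresponding case flips to fail and the monitor alerts.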
7) Best Practices
- Environment-specific monitors: Separate Dev, QA, and Production.
- Small focused suites: Create monitors for login, payments, or search robustness separately.
- Retries enabled: Avoid false alarms from transient errors.
- Integrate alerts with Slack/Teams for immediate visibility.
- Review trends weekly to track input validation consistency.
8) Benefits
- Ensures APIs handle invalid or unexpected inputs continuously.
- Detects regressions in input validation after deployments.
- Improves reliability by validating error responses automatically.
- Helps maintain resilient APIs across environments.
Robustness Testing with Monitoring ensures that APIs not only work in the happy path but remain reliable and predictable under invalid and unexpected conditions.