# Test Flows
Test Flows are automated tests that verify critical customer journeys on the storefront — checkout, search, navigation, cart, and more. Tests run on a schedule and alert you when something breaks.
## Create a test with AI
The fastest way to build test coverage: Shoptest generates tests from predefined scenarios.
- Go to Test Flows and click Create test.
- Select Generate with AI.
- Choose a scenario:
| Scenario | What it tests | Approx. time |
|---|---|---|
| Checkout Flow | Add to cart → proceed to checkout | ~30s |
| Product Search | Search by name → verify results | ~25s |
| Search — No Results | Search → verify "no results" message | ~20s |
| Mobile Navigation | Hamburger menu → tap a link | ~20s |
| Quantity Selector | Increment quantity → verify change | ~20s |
| Cart Editing | Add, update quantity, remove from cart | ~40s |
| Collection Filters | Apply filter → verify products update | ~30s |
| Quick Add to Cart | Add from collection page | ~25s |
| Image Gallery | Click thumbnails and carousel arrows | ~23s |
- For scenarios that require products, select representative in-stock items only.
- Click Generate. Each successful generation costs 100 credits. Failed generations are free.
Recommended starting set: Checkout Flow + Product Search + Mobile Navigation covers the three most common failure points.
## Generate Test Suite
To create multiple tests at once, click Generate Test Suite from the Test Flows page. Shoptest builds a recommended set of tests for the store's critical journeys in one step.
## Record a test manually
Use the manual recorder when the store has custom UX patterns or when AI generation is not the right fit.
- Go to Test Flows and click Create test.
- Select Record Manually.
- Configure the test before recording:
  - Test Name — use a descriptive name (e.g. Checkout flow — mobile).
  - Frequency — how often the test runs (default: every 6 hours).
  - Test Mode — Static URL, Batch Mode, or Journey Mode (see below).
  - Device Mode — Desktop (1280×720) or Mobile.
  - Starting URL — pre-filled with the store's verified domain.
- Click Start Recording. A virtual browser opens on the store.
- Walk through the journey at a natural pace.
- Add validations after every meaningful action (see Validations below).
- Remove any accidental interactions from the step list.
- Save the test.
Important: Confirm Starting URL, Test Mode, and Device Mode before you start recording — these cannot be changed mid-session.
### Test modes
| Mode | Use when |
|---|---|
| Static URL | You need a simple page-availability check (page loads without errors) |
| Batch Mode | The same flow should run across multiple products or URLs |
| Journey Mode | The test follows a sequential user flow with multiple steps and transitions |
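To make the Batch Mode idea concrete, the sketch below replays one flow across several product URLs. This is an illustration only, not Shoptest's actual runner: the `run_flow` function, the store domain, and the product handles are all hypothetical.

```python
# Conceptual sketch of Batch Mode: one recorded flow, replayed once per
# target URL. Everything here is illustrative, not Shoptest internals.

def run_flow(url: str) -> dict:
    """Stand-in for replaying the recorded steps against one URL."""
    # A real runner would drive a browser here; we only record the target.
    return {"url": url, "status": "pass"}

def run_batch(base_url: str, product_handles: list[str]) -> list[dict]:
    """Expand a single flow into one run per product, as Batch Mode does."""
    return [run_flow(f"{base_url}/products/{handle}")
            for handle in product_handles]

results = run_batch("https://example-store.myshopify.com",
                    ["blue-tee", "red-hoodie"])
for r in results:
    print(r["url"], r["status"])
```

The key property is that the flow is written once and the data (the product list) varies, which is why Batch Mode validations should avoid product-specific text.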
## Validations
Validations are assertions you add during recording to verify that each step produced the correct result. Without validations, a test only confirms pages loaded — not that the right content appeared.
### Validation types
| Type | When to use |
|---|---|
| Text present on page | After add-to-cart, checkout steps, and confirmations — enter exact text that should appear (e.g. Added to cart) |
| Element visible | To confirm CTAs, banners, or checkout fields rendered correctly |
| Element not visible | To verify error messages do not show after valid actions, or out-of-stock badges do not appear for in-stock products |
| URL contains / URL equals | At every major page transition: cart → /cart, checkout → /checkout, order → /thank_you |
| Page title assertion | Lightweight check that the correct page loaded |
| Input field value | After quantity increment/decrement to confirm the value changed |
| Element count | After filter application on collection pages to confirm products updated |
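As a rough mental model, each validation type reduces to an assertion against the rendered page. The sketch below applies three of the types from the table to a static HTML snippet; Shoptest evaluates these in a live browser, so the stdlib parsing here is purely illustrative, and the sample markup is made up.

```python
# Illustrative only: three validation types applied to a static snippet.
from html.parser import HTMLParser

SAMPLE = """
<main>
  <p class="flash">Added to cart</p>
  <input name="quantity" value="2">
  <div class="product-card"></div>
  <div class="product-card"></div>
</main>
"""

class ElementScanner(HTMLParser):
    """Counts elements with a given class and captures the quantity input."""
    def __init__(self, cls):
        super().__init__()
        self.cls, self.count, self.qty = cls, 0, None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if a.get("class") == self.cls:
            self.count += 1
        if tag == "input" and a.get("name") == "quantity":
            self.qty = a.get("value")

scanner = ElementScanner("product-card")
scanner.feed(SAMPLE)

assert "Added to cart" in SAMPLE   # Text present on page
assert scanner.qty == "2"          # Input field value (after quantity change)
assert scanner.count == 2          # Element count (after a collection filter)
```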
### Validation placement
- Add a validation after every action whose outcome matters: add to cart, page navigation, form submission, filter application.
- Validate at every checkout transition — cart → checkout → confirmation.
- Minimum for a checkout test: 3 validations — cart reached, checkout reached, order confirmed.
- In Batch Mode, avoid product-specific text — use generic text like Add to cart or Checkout that appears across all products.
- In Journey Mode, validate each step in sequence — do not rely on a single end-state validation.
- Avoid over-validating layout elements (header, footer) — focus on journey-critical content.
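The minimum checkout validation set above amounts to one URL assertion per transition. The following sketch simulates that with hard-coded URLs (the store domain and page sequence are hypothetical; a real run checks the live browser's address at each step).

```python
# Minimum checkout validations as URL assertions, one per transition.
# The journey below is simulated data, not a live browser session.

def url_contains(current_url: str, fragment: str) -> bool:
    return fragment in current_url

journey = [
    "https://example-store.myshopify.com/cart",       # after add to cart
    "https://example-store.myshopify.com/checkout",   # after proceeding
    "https://example-store.myshopify.com/thank_you",  # after ordering
]
expected = ["/cart", "/checkout", "/thank_you"]

passed = [url_contains(url, frag) for url, frag in zip(journey, expected)]
assert all(passed)  # cart reached, checkout reached, order confirmed
```

Validating each transition in sequence, rather than only the final state, is what lets a failure point at the step that broke.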
## Advanced recording interactions
For stores with custom themes or complex checkout flows:
| Interaction | When to use |
|---|---|
| Hover | Navigation that requires mouse-over to reveal dropdowns or mega menus |
| Wait for element | Stores with heavy JavaScript rendering — prevents acting on elements before they load |
| Advanced click (double-click, modifier keys) | Quantity selectors that require double-tap, or elements requiring Shift/Ctrl click |
| Scroll to element | Below-the-fold elements that need to be in viewport before interaction |
| Keyboard input | Type into fields, press Enter after search input (more reliable than clicking the search button) |
| Iframe interaction | Embedded payment widgets (Stripe, PayPal) — standard clicks do not reach inside iframes |
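Of these, "Wait for element" is worth spelling out: it replaces acting immediately with polling until the element exists or a timeout expires. A generic sketch of that pattern, not Shoptest's implementation, with a simulated delayed element:

```python
# Generic poll-with-timeout pattern behind "Wait for element".
import time

def wait_for(predicate, timeout=5.0, interval=0.1):
    """Poll `predicate` until it returns truthy or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("element did not appear in time")

# Simulate an element that "renders" 0.3s after the page loads.
appeared_at = time.monotonic() + 0.3
element = wait_for(lambda: time.monotonic() >= appeared_at)
```

Without this, a step recorded on a fast machine can fail in production simply because a JavaScript-rendered element had not appeared yet.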
## Traffic Simulator
Expand the Traffic Simulator section on the test creation screen to inject context that makes tests more realistic.
| Setting | Use when |
|---|---|
| UTM parameters | Testing campaign landing pages where storefront behaviour changes based on traffic source |
| Cookie injection | Simulating a logged-in or returning customer |
| Referrer | Simulating arrival from a specific source |
| User agent | Overriding the default browser user agent (note: this differs from Device Mode viewport emulation) |
| Query parameters | Testing URL-driven personalisation or A/B test variants |
Leave Traffic Simulator collapsed for standard journey tests where traffic source is not a variable.
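Conceptually, the Traffic Simulator settings map onto standard parts of an HTTP request: UTM and query parameters go on the URL, while referrer, user agent, and cookies travel as request headers. The sketch below shows that mapping; the function name and all values are illustrative, not a Shoptest API.

```python
# Where each Traffic Simulator setting lands on a request (illustrative).
from urllib.parse import urlencode

def simulate_visit(url, utm=None, referrer=None, user_agent=None, cookies=None):
    params = dict(utm or {})
    full_url = f"{url}?{urlencode(params)}" if params else url
    headers = {}
    if referrer:
        headers["Referer"] = referrer          # HTTP's historical spelling
    if user_agent:
        headers["User-Agent"] = user_agent
    if cookies:
        headers["Cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return full_url, headers

url, headers = simulate_visit(
    "https://example-store.myshopify.com/",
    utm={"utm_source": "newsletter", "utm_campaign": "spring"},
    referrer="https://mail.example.com/",
    cookies={"customer_logged_in": "true"},
)
```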
## Review results and run history
After a test runs:
- Go to Test Flows and click the test.
- Review the latest run:
  - Pass/fail status — shown at the top.
  - Step-by-step breakdown — each step with its outcome.
  - Screenshots — captured at key journey points.
  - Run log — detailed execution record.
  - Batch results — per-product results for Batch-type tests.
- Scroll to Run history to see all previous runs with dates and outcomes.
Use run history to spot patterns — repeated failures at the same time of day may indicate a scheduled dependency issue. Review history before and after major store changes to identify when a failure began.
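Spotting that kind of time-correlated pattern is a simple grouping exercise. The sketch below groups a run history by hour of day and flags hours with repeated failures; the timestamps and the threshold of three failures are made-up illustrations.

```python
# Flag hours of day with repeated failures in a (made-up) run history.
from collections import Counter

runs = [
    ("2024-05-01T03:00", "fail"),
    ("2024-05-01T09:00", "pass"),
    ("2024-05-02T03:00", "fail"),
    ("2024-05-02T09:00", "pass"),
    ("2024-05-03T03:00", "fail"),
]

# Characters 11:13 of an ISO timestamp are the hour of day.
failures_by_hour = Counter(ts[11:13] for ts, status in runs if status == "fail")
suspect_hours = [h for h, n in failures_by_hour.items() if n >= 3]
print(suspect_hours)  # → ['03'] — the 03:00 slot keeps failing
```

A result like this suggests checking what else runs at that hour, such as a nightly sync or cache purge, rather than debugging the test itself.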
When a test fails: See Investigating failures for AI analysis, AutoFix, and Safari checks.
## Troubleshooting
### Test shows Pending and has never run
- Trigger the test manually to confirm it works. If it fails immediately, check the starting URL and the storefront password in Settings.
### Store is password-protected and tests cannot access it
- Enter the storefront password in Settings > Storefront Password. This is the password visitors use to enter the store — not the Shopify admin password.
### AI test generation failed
- No credits are charged for failed generations. Check that the store is accessible (not password-blocked) and that the selected product is in stock. Try again or switch to manual recording.
### Test passes in Chrome but fails in Safari
- This is a genuine browser-specific issue. Run a Safari compatibility check and review the video and logs to identify the difference.