Test Flows

Test Flows are automated tests that verify critical customer journeys on the storefront — checkout, search, navigation, cart, and more. Tests run on a schedule and alert you when something breaks.


Create a test with AI

The fastest way to build test coverage. Shoptest generates tests from predefined scenarios.

  1. Go to Test Flows and click Create test.
  2. Select Generate with AI.
  3. Choose a scenario:
| Scenario | What it tests | Approx. time |
| --- | --- | --- |
| Checkout Flow | Add to cart → proceed to checkout | ~30s |
| Product Search | Search by name → verify results | ~25s |
| Search — No Results | Search → verify "no results" message | ~20s |
| Mobile Navigation | Hamburger menu → tap a link | ~20s |
| Quantity Selector | Increment quantity → verify change | ~20s |
| Cart Editing | Add, update quantity, remove from cart | ~40s |
| Collection Filters | Apply filter → verify products update | ~30s |
| Quick Add to Cart | Add from collection page | ~25s |
| Image Gallery | Click thumbnails and carousel arrows | ~23s |
  4. For scenarios that require products, select representative in-stock items only.
  5. Click Generate. Each successful generation costs 100 credits. Failed generations are free.

Recommended starting set: Checkout Flow + Product Search + Mobile Navigation covers the three most common failure points.

Generate Test Suite

To create multiple tests at once, click Generate Test Suite from the Test Flows page. Shoptest builds a recommended set of tests for the store's critical journeys in one step.


Record a test manually

Use the manual recorder when the store has custom UX patterns or when AI generation is not the right fit.

  1. Go to Test Flows and click Create test.
  2. Select Record Manually.
  3. Configure the test before recording:
    • Test Name — use a descriptive name (e.g. Checkout flow — mobile).
    • Frequency — how often the test runs (default: every 6 hours).
    • Test Mode — Static URL, Batch Mode, or Journey Mode (see below).
    • Device Mode — Desktop (1280×720) or Mobile.
    • Starting URL — pre-filled with the store's verified domain.
  4. Click Start Recording. A virtual browser opens on the store.
  5. Walk through the journey at a natural pace.
  6. Add validations after every meaningful action (see Validations below).
  7. Remove any accidental interactions from the step list.
  8. Save the test.

Important: Confirm Starting URL, Test Mode, and Device Mode before you start recording — these cannot be changed mid-session.


Test modes

| Mode | Use when |
| --- | --- |
| Static URL | You need a simple page-availability check (page loads without errors) |
| Batch Mode | The same flow should run across multiple products or URLs |
| Journey Mode | The test follows a sequential user flow with multiple steps and transitions |
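The three modes can be pictured as different shapes of the same check. The sketch below is purely conceptual (it is not Shoptest's API; `check_page` and the example URLs stand in for whatever a real run does):

```python
# Conceptual sketch of the three test modes (not Shoptest's internals).
# `check_page` is a placeholder for the assertions a real run performs.

def check_page(url: str) -> bool:
    """Placeholder check; a real run would load the URL in a browser."""
    return url.startswith("https://")

# Static URL: a single page-availability check.
static_result = check_page("https://example-store.com/")

# Batch Mode: the same check repeated across multiple product URLs.
product_urls = [
    "https://example-store.com/products/a",
    "https://example-store.com/products/b",
]
batch_results = {url: check_page(url) for url in product_urls}

# Journey Mode: sequential steps, each building on the previous one.
journey_steps = ["/", "/cart", "/checkout"]
journey_ok = all(
    check_page("https://example-store.com" + path) for path in journey_steps
)
```

The practical difference: a Batch test reports one result per URL, while a Journey test fails as a whole if any step in the sequence fails.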

Validations

Validations are assertions you add during recording to verify that each step produced the correct result. Without validations, a test only confirms pages loaded — not that the right content appeared.

Validation types

| Type | When to use |
| --- | --- |
| Text present on page | After add-to-cart, checkout steps, and confirmations — enter exact text that should appear (e.g. Added to cart) |
| Element visible | To confirm CTAs, banners, or checkout fields rendered correctly |
| Element not visible | To verify error messages do not show after valid actions, or out-of-stock badges do not appear for in-stock products |
| URL contains / URL equals | At every major page transition: cart → /cart, checkout → /checkout, order → /thank_you |
| Page title assertion | Lightweight check that the correct page loaded |
| Input field value | After quantity increment/decrement to confirm the value changed |
| Element count | After filter application on collection pages to confirm products updated |

Validation placement

  • Add a validation after every action whose outcome matters: add to cart, page navigation, form submission, filter application.
  • Validate at every checkout transition — cart → checkout → confirmation.
  • Minimum for a checkout test: 3 validations — cart reached, checkout reached, order confirmed.
  • In Batch Mode, avoid product-specific text — use generic text like Add to cart or Checkout that appears across all products.
  • In Journey Mode, validate each step in sequence — do not rely on a single end-state validation.
  • Avoid over-validating layout elements (header, footer) — focus on journey-critical content.
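As a mental model, the three-validation minimum for a checkout test amounts to URL-substring assertions at each transition. This is an illustrative sketch (the `url_contains` helper and example URLs are invented here, not a Shoptest API; the paths match the validation table above):

```python
# Hedged sketch: the minimum checkout validations as URL-substring checks.

def url_contains(current_url: str, fragment: str) -> bool:
    """Illustrative stand-in for the 'URL contains' validation type."""
    return fragment in current_url

# URLs a healthy run might visit at each transition (example values).
visited = [
    "https://example-store.com/cart",
    "https://example-store.com/checkout",
    "https://example-store.com/thank_you",
]

cart_reached = url_contains(visited[0], "/cart")
checkout_reached = url_contains(visited[1], "/checkout")
order_confirmed = url_contains(visited[2], "/thank_you")
```

If only the final `/thank_you` check existed, a failure anywhere earlier in the journey would surface as a single opaque failure at the end — which is exactly why Journey Mode tests should validate each step in sequence.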

Advanced recording interactions

For stores with custom themes or complex checkout flows:

| Interaction | When to use |
| --- | --- |
| Hover | Navigation that requires mouse-over to reveal dropdowns or mega menus |
| Wait for element | Stores with heavy JavaScript rendering — prevents acting on elements before they load |
| Advanced click (double-click, modifier keys) | Quantity selectors that require double-tap, or elements requiring Shift/Ctrl click |
| Scroll to element | Below-the-fold elements that need to be in viewport before interaction |
| Keyboard input | Type into fields, press Enter after search input (more reliable than clicking the search button) |
| Iframe interaction | Embedded payment widgets (Stripe, PayPal) — standard clicks do not reach inside iframes |
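To see why "Wait for element" matters on JavaScript-heavy stores, the underlying idea is a polling loop: retry until a condition holds or a timeout expires, instead of acting immediately. This generic sketch illustrates the pattern only (it is not the recorder's implementation; `wait_for` and the simulated element are invented here):

```python
import time

def wait_for(condition, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate an element that only "renders" after a short delay.
appear_at = time.monotonic() + 0.3
element_visible = lambda: time.monotonic() >= appear_at

found = wait_for(element_visible, timeout=2.0)
```

A plain click fired before the element exists would fail; the polling version succeeds as soon as rendering completes, and still fails cleanly if the element never appears.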

Traffic Simulator

Expand the Traffic Simulator section on the test creation screen to inject context that makes tests more realistic.

| Setting | Use when |
| --- | --- |
| UTM parameters | Testing campaign landing pages where storefront behaviour changes based on traffic source |
| Cookie injection | Simulating a logged-in or returning customer |
| Referrer | Simulating arrival from a specific source |
| User agent | Overriding the default browser user agent (note: this differs from Device Mode viewport emulation) |
| Query parameters | Testing URL-driven personalisation or A/B test variants |

Leave Traffic Simulator collapsed for standard journey tests where traffic source is not a variable.
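In effect, UTM-parameter injection appends campaign tags to the starting URL so the storefront sees a specific traffic source. A minimal sketch of what that means, with example values (configure the real ones in the Traffic Simulator panel):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Example campaign tags — illustrative values only.
base_url = "https://example-store.com/collections/sale"
utm = {
    "utm_source": "newsletter",
    "utm_medium": "email",
    "utm_campaign": "spring_sale",
}

# The tagged URL the virtual browser would visit.
tagged_url = f"{base_url}?{urlencode(utm)}"

# The storefront reads the tags back from the query string.
params = parse_qs(urlparse(tagged_url).query)
```

This is why UTM injection matters for campaign landing pages: any personalisation keyed off `utm_source` or `utm_campaign` only activates when the test arrives with those parameters present.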


Review results and run history

After a test runs:

  1. Go to Test Flows and click the test.
  2. Review the latest run:
    • Pass/fail status — shown at the top.
    • Step-by-step breakdown — each step with its outcome.
    • Screenshots — captured at key journey points.
    • Run log — detailed execution record.
    • Batch results — per-product results for Batch-type tests.
  3. Scroll to Run history to see all previous runs with dates and outcomes.

Use run history to spot patterns — repeated failures at the same time of day may indicate a dependency issue. Review history before and after major store changes to identify when a failure began.
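The pattern-spotting step can be done by bucketing failed runs by hour of day — a spike in one bucket suggests a time-correlated dependency issue (a scheduled job, a third-party outage window). The timestamps below are illustrative; substitute the real ones from Run history:

```python
from collections import Counter
from datetime import datetime

# Example failure timestamps (ISO 8601) — illustrative data only.
failed_runs = [
    "2024-05-01T03:05:00",
    "2024-05-02T03:10:00",
    "2024-05-03T03:02:00",
    "2024-05-03T14:30:00",
]

# Bucket failures by hour of day and find the most frequent bucket.
failures_by_hour = Counter(datetime.fromisoformat(ts).hour for ts in failed_runs)
suspect_hour, count = failures_by_hour.most_common(1)[0]
```

Here three of four failures cluster at 03:00, which would point toward something that runs nightly rather than a flaw in the test itself.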

When a test fails: See Investigating failures for AI analysis, AutoFix, and Safari checks.


Troubleshooting

Test shows Pending and has never run

  • Trigger the test manually to confirm it works. If it fails immediately, check the starting URL and the storefront password in Settings.

Store is password-protected and tests cannot access it

  • Enter the storefront password in Settings > Storefront Password. This is the password visitors use to enter the store — not the Shopify admin password.

AI test generation failed

  • No credits are charged for failed generations. Check that the store is accessible (not password-blocked) and that the selected product is in stock. Try again or switch to manual recording.

Test passes in Chrome but fails in Safari

  • This is a genuine browser-specific issue. Run a Safari compatibility check and review the video and logs to identify the difference.