
Run Tests with Datasets

Make use of the parameters set for a test case while executing it

Written by Shriti Grover
Updated over 2 weeks ago

Run your cases the data-driven way.
When a test case is linked to a dataset, every row becomes its own iteration, so one set of steps covers all your input combinations—no cloning required.

How it works

  • When a test case that’s linked to a dataset is added to a test plan, each row in the dataset becomes a separate iteration of that test case.

  • During execution, the parameter placeholders you added to steps and expected results are automatically replaced with the row’s values, so testers see real data on screen (see the sketch below).
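
The row-to-iteration mapping is easiest to see in a small sketch. The snippet below is a conceptual illustration only, not the product’s internal implementation: the field names, steps, and placeholder syntax are made up for the example. Each dataset row supplies values for the placeholders, producing one iteration per row.

```python
# Conceptual sketch only -- field names, steps, and placeholder syntax are hypothetical.
# It illustrates how one set of steps plus a dataset expands into per-row iterations.

dataset = [
    {"username": "standard_user", "password": "secret1"},
    {"username": "locked_user", "password": "secret2"},
]

steps = [
    "Enter {username} in the username field",
    "Enter {password} in the password field",
    "Click Sign in and verify the dashboard loads",
]

# Each dataset row becomes its own iteration of the same steps,
# with the placeholders replaced by that row's values.
iterations = [[step.format(**row) for step in steps] for row in dataset]

for number, iteration in enumerate(iterations, start=1):
    print(f"Iteration {number}:")
    for step in iteration:
        print("  -", step)
```

Running the sketch prints the same three steps twice, once per row, with the substituted data inline, which is what testers see during a plan run.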


Running the test

  1. Add the test case to a test plan as you normally would.

  2. Start the plan run.

  3. In the test case view pane on the run page, you’ll see the test steps listed once for every dataset row.

  4. Open an iteration and work through the steps; the substituted data appears inline.

  5. For each iteration the tester can:

    • Set a status (Pass, Fail, Blocked, etc.).

    • Leave comments or notes.

    • Attach screenshots or other evidence.


What happens next

  • All iterations are recorded individually, letting you track which data combinations passed or failed.

  • If you later edit the dataset, existing run results stay unchanged; they store the exact values that were used at the time of execution (illustrated in the sketch below).
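
The point about results keeping the values used at execution time can be pictured with a small sketch. It is hypothetical (the field names are invented and the product’s storage format is not documented here), but it captures the idea: each recorded result holds its own snapshot of the row, so later dataset edits do not rewrite finished runs.

```python
# Conceptual sketch only -- field names are hypothetical, not the product's data model.
# Each result keeps its own copy of the row values used during execution.
from copy import deepcopy

dataset = [{"username": "standard_user", "password": "secret1"}]

results = []
for index, row in enumerate(dataset, start=1):
    results.append({
        "iteration": index,
        "status": "Passed",
        "data_used": deepcopy(row),  # snapshot taken at execution time
    })

# A later edit to the dataset...
dataset[0]["username"] = "renamed_user"

# ...does not change what was recorded for the finished run.
print(results[0]["data_used"]["username"])  # -> standard_user
```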

Tip: If you need to retest with an updated dataset, create a new test plan run—this keeps historical results intact while using the latest data.
