5 Ways to Handle Test Data in Playwright | TestDino Insights

You create a test that adds a new roster. It passes. You run it again. It fails.

"A roster with that name already exists."

Sound familiar?

This is one of the most common problems new test automation engineers hit, and it reveals a deeper issue: test data management.

When you have 10 tests, you can manually clean up data between runs. When you have 1000 tests running in parallel across multiple environments, that approach falls apart fast.

The real question isn't "How do I clean data?" It's "Which data strategy prevents my test suite from becoming unmaintainable?"

Here are five patterns that actually scale, from simplest to most robust.


Pattern 1: Explicit Cleanup in afterEach/afterAll

The most straightforward approach: delete what you create.

```typescript
test.afterEach(async ({ request }) => {
  await request.delete('/api/rosters/test-roster-name');
});
```

When this works

  • Small suites (under 50 tests)
  • Single-threaded execution
  • Stable test environments

When this breaks

Parallel execution. If two workers create the same roster simultaneously, one will fail before cleanup runs.

Also, if your test crashes mid-execution, cleanup might never happen.

Pro tip: Always check if the resource exists before trying to delete it.
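Following that tip, the hook might look like this. A minimal sketch, reusing the hypothetical `/api/rosters` endpoint and roster name from above:

```typescript
import { test } from '@playwright/test';

test.afterEach(async ({ request }) => {
  // Only attempt the delete if the roster actually exists;
  // a test that crashed mid-run may never have created it.
  const existing = await request.get('/api/rosters/test-roster-name');
  if (existing.ok()) {
    await request.delete('/api/rosters/test-roster-name');
  }
});
```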


Pattern 2: Unique Test Data Per Run (The UUID Approach)

Instead of fighting over the same data, generate unique identifiers for every test execution.

```typescript
import { randomUUID } from 'crypto';

test('create roster', async ({ page }) => {
  const rosterName = `roster-${randomUUID()}`;
  await page.fill('[name="roster"]', rosterName);
  await page.click('button[type="submit"]');
});
```

When this works

Always. This is the most reliable pattern for parallel execution and CI/CD pipelines.

When this breaks

  • Testing specific edge cases (like "roster name already exists")
  • Database starts bloating with test data

The tradeoff

You'll need a separate cleanup job to purge old test data periodically. But you gain total isolation, which means maximum parallelism.
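That cleanup job can be a small standalone script run nightly from cron or CI. Here's one possible sketch, assuming a hypothetical listing endpoint that returns roster names and creation timestamps:

```typescript
// Hypothetical nightly cleanup script; the /api/rosters listing
// endpoint and its response shape are assumptions, not a real API.
import { request } from '@playwright/test';

async function purgeStaleTestData() {
  const api = await request.newContext({ baseURL: 'https://staging.example.com' });

  // Assumed to return [{ name, createdAt }, ...] for our test-data prefix
  const res = await api.get('/api/rosters?prefix=roster-');
  const rosters: { name: string; createdAt: string }[] = await res.json();

  const cutoff = Date.now() - 24 * 60 * 60 * 1000; // anything older than 24h
  for (const roster of rosters) {
    if (new Date(roster.createdAt).getTime() < cutoff) {
      await api.delete(`/api/rosters/${roster.name}`);
    }
  }
  await api.dispose();
}

purgeStaleTestData();
```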


Pattern 3: API-Based Teardown (Even for UI Tests)

Your test exercises the UI, but cleanup happens via API. This is faster and more reliable than clicking through UI flows.

```typescript
test('roster workflow', async ({ page, request }) => {
  const rosterName = 'test-roster';

  // UI test flow
  await page.fill('[name="roster"]', rosterName);
  await page.click('button[type="submit"]');

  // API cleanup (the smart part)
  await request.delete(`/api/rosters/${rosterName}`);
});
```

When this works

When you have API access and can authenticate programmatically.

When this breaks

When API and UI behavior diverge (the API might allow deleting something the UI protects).

Best practice: Keep UI tests focused on UI behavior. Use API calls for setup and teardown only.
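The same idea works in reverse: use the API for setup so the UI test starts from a known state. A sketch, with hypothetical endpoints, routes, and selectors standing in for the app above:

```typescript
import { test, expect } from '@playwright/test';

test('rename roster via UI', async ({ page, request }) => {
  // API setup: fast, reliable, and not the thing under test
  await request.post('/api/rosters', { data: { name: 'rename-me' } });

  // UI is the thing under test
  await page.goto('/rosters/rename-me');
  await page.fill('[name="roster"]', 'renamed-roster');
  await page.click('button[type="submit"]');
  await expect(page.locator('.roster-title')).toHaveText('renamed-roster');

  // API teardown
  await request.delete('/api/rosters/renamed-roster');
});
```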


Pattern 4: Database Transactions (The Hacky Solution)

Wrap your test in a database transaction and roll it back after completion. No data persists.

```typescript
// Note: `db` is a custom worker-scoped fixture exposing a database
// client. Playwright does not provide one out of the box.
test.beforeAll(async ({ db }) => {
  await db.query('BEGIN TRANSACTION');
});

test.afterAll(async ({ db }) => {
  await db.query('ROLLBACK');
});
```

When this works

Local development with direct database access.

When this breaks

Most real scenarios. Transactions:

  • Lock tables
  • Break async workflows
  • Don't work with microservices
  • Don't test your actual delete logic

Use case: Isolated unit tests for data access layers, not full integration tests.
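For that narrow use case, the pattern can look like this. A sketch using node-postgres (`pg`), where `insertRoster` is a hypothetical data-access-layer function, not part of the app shown earlier:

```typescript
import { Client } from 'pg';
import { test, expect } from '@playwright/test';
import { insertRoster } from '../src/dal/rosters'; // hypothetical DAL module

test('insertRoster writes a row', async () => {
  const client = new Client({ connectionString: process.env.TEST_DATABASE_URL });
  await client.connect();
  await client.query('BEGIN');
  try {
    await insertRoster(client, 'tx-roster');
    const { rows } = await client.query(
      'SELECT name FROM rosters WHERE name = $1',
      ['tx-roster']
    );
    expect(rows).toHaveLength(1);
  } finally {
    await client.query('ROLLBACK'); // nothing persists after the test
    await client.end();
  }
});
```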


Pattern 5: Environment Isolation with Seeded Data

Create dedicated test environments with pre-seeded data pools. Each environment resets to a known state before test runs.

Tests use predefined data pools (test-user-1, test-user-2) rather than creating new data.
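One way to draw from such a pool without collisions is to key the seeded account off the worker index. A minimal sketch, assuming users test-user-0 through test-user-N exist in the seed data:

```typescript
import { test } from '@playwright/test';

test('view assigned rosters', async ({ page }) => {
  // parallelIndex is stable per worker, so worker 0 always
  // gets test-user-0, worker 1 gets test-user-1, and so on
  const seededUser = `test-user-${test.info().parallelIndex}`;

  await page.goto('/login');
  await page.fill('[name="username"]', seededUser);
  await page.fill('[name="password"]', process.env.SEED_PASSWORD ?? 'seeded');
  await page.click('button[type="submit"]');
});
```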

When this works

Large organizations with dedicated QA infrastructure and strict regulatory requirements.

When this breaks

Small teams without DevOps resources. High maintenance burden.

Reality check: Most teams start here, realize the infrastructure cost, and migrate to Pattern 2.


The Question These Patterns Can't Answer

Here's what none of these patterns solve: visibility.

When your test fails, you need to know:

  • Which data did it use?
  • Was that data corrupted from a previous run?
  • Did cleanup actually happen?
  • Is this failure related to data state or actual functionality?

This is where test execution history becomes critical. You're not just managing data; you're managing the context around that data across hundreds or thousands of test runs.
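A cheap first step is to record the data each test used directly in its report. A sketch using Playwright's built-in annotations, applied to the UUID pattern from earlier:

```typescript
import { test } from '@playwright/test';
import { randomUUID } from 'crypto';

test('create roster', async ({ page }) => {
  const rosterName = `roster-${randomUUID()}`;

  // Annotations surface in the HTML/JSON report next to any failure,
  // so you can see exactly which data this run touched
  test.info().annotations.push({ type: 'test-data', description: rosterName });

  await page.fill('[name="roster"]', rosterName);
  await page.click('button[type="submit"]');
});
```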

For teams running Playwright at scale, platforms like TestDino add this layer of execution intelligence. Instead of digging through CI logs to figure out which UUID your test used, you get automatic tracking of test inputs, execution context, and failure patterns across all runs.

When a test fails because of stale data or a cleanup issue, TestDino's AI-powered failure classification can distinguish between "data collision" and "actual bug," saving hours of manual triage.

If you're evaluating test management tools, execution-level insight into data patterns is what separates basic reporting from actionable intelligence.


Which Pattern Should You Use?

Here's my recommendation for most teams scaling Playwright:

  1. Start with Pattern 2 (unique data per run) → simplest path to reliable parallel execution

  2. Add Pattern 3 (API cleanup) → control database growth

  3. Avoid Pattern 4 (transactions) → unless you're testing data access layers specifically

  4. Skip Pattern 5 (environment isolation) → unless you have dedicated infrastructure teams

And regardless of which pattern you choose, invest in execution visibility early. The debugging time you save will pay for itself within the first sprint.


Your Turn

What's your current test data strategy? Have you found patterns that work better than these?

Drop a comment below 👇 We'd love to hear what's working (or not working) for your team!


If you found this helpful, consider following me for more Playwright testing tips and automation best practices.
