StarterApp Docs
Testing

AI-Assisted Testing

Using AI agents to generate tests that follow project conventions

The codebase includes testing context files that help AI agents generate comprehensive tests. This guide explains how to work with AI on testing.

Pro Tip

AI agents read llms/TESTING.md for testing patterns and conventions. Referencing this file in prompts helps AI generate tests that match the codebase style.

Context Files for AI

llms/TESTING.md

Comprehensive testing guide covering:

  • Unit vs integration vs E2E test selection
  • Vitest configuration (jsdom vs node environments)
  • Security assertion requirements
  • Test utility usage patterns
  • Naming conventions (*.unit.test.tsx, *.integration.test.ts)

When working with AI on tests:

"Read llms/TESTING.md and generate unit tests for the UserMenu component"

Test Templates

Working examples throughout the codebase:

  • apps/dashboard/__tests__/ - Real integration and unit tests
  • packages/app-shell/src/lib/test-utils/ - Reusable test utilities
  • Test patterns in actual component __tests__/ directories

Effective AI Prompts

Clear and Specific

Vague prompts ("write some tests") produce generic tests. Name the exact component or route, the test type, and the scenarios you want covered.

Reference Context

Point AI to testing context for comprehensive tests:

"Read llms/TESTING.md and generate integration tests for the support ticket API route.
Include CSRF validation, rate limiting tests, and security header assertions."

Common AI Workflows

Generating Component Tests

1. "Read llms/TESTING.md"
2. "Generate unit tests for the BillingDashboard component"
3. "Include tests for loading state, upgrade button, and error handling"

AI will generate:

import { render, screen } from "@testing-library/react";
import { expect, test, vi } from "vitest";
import { TestServicesProvider } from "@workspace/app-shell/lib/test-utils";
// Import path for the component under test is illustrative; adjust to its actual location
import { BillingDashboard } from "../BillingDashboard";

test("shows loading state", () => {
  const mockBilling = {
    useCustomer: vi.fn().mockReturnValue({ loading: true, customer: null }),
    checkout: vi.fn(),
    attach: vi.fn(),
    check: vi.fn(),
  };

  render(
    <TestServicesProvider services={{ billing: mockBilling }}>
      <BillingDashboard />
    </TestServicesProvider>
  );

  expect(screen.getByText(/loading/i)).toBeInTheDocument();
});

Generating API Route Tests

"Read llms/TESTING.md and generate integration tests for /api/user/settings.
Use Node environment, mock auth, and validate security headers with assertAuthHardenerHeaders."

AI will generate proper Node environment tests with security validation.

Generating E2E Tests

"Read llms/TESTING.md and generate E2E smoke test that verifies:
1. Homepage loads without errors
2. Protected routes redirect to sign-in
3. Security headers are present"

Validation After AI Generates Tests

Run the Tests

pnpm test path/to/generated.test.ts

Verify tests pass and cover expected scenarios.

Check Environment

Verify AI used correct test environment:

// API routes, middleware, server actions
/** @vitest-environment node */

// Components, hooks (default)
// No annotation needed
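One quick way to catch a missing annotation is to scan the generated file before running it. The sketch below is a hypothetical illustration (the declaresNodeEnvironment helper is not a project utility), showing the check described above:

```typescript
// Hypothetical sketch: verify that an API-route test file declares the
// Node environment. Vitest reads this from a docblock comment at the
// top of the file.
function declaresNodeEnvironment(source: string): boolean {
  return /@vitest-environment\s+node\b/.test(source);
}

const apiRouteTest = `/** @vitest-environment node */
import { expect, test } from "vitest";`;

const componentTest = `import { render } from "@testing-library/react";`;

console.log(declaresNodeEnvironment(apiRouteTest)); // true
console.log(declaresNodeEnvironment(componentTest)); // false
```

A check like this could run in a pre-commit hook so API route tests never silently execute in jsdom.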

Verify Imports

Check that AI imported from correct paths:

// ✅ Correct
import { TestServicesProvider } from "@workspace/app-shell/lib/test-utils";

// ❌ Wrong
import { TestServicesProvider } from "~/lib/test-utils";

Check Security Assertions

Verify AI included security checks per llms/TESTING.md:

// For dashboard pages
expect(csp).toContain("'strict-dynamic'");
expect(headers.get("x-nonce")).toBeTruthy();

// For API routes
assertAuthHardenerHeaders(response.headers);
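These checks can also be exercised outside a test runner. The sketch below uses the standard Headers API (built into Node 18+) and a hypothetical checkSecurityHeaders helper to show what the assertions above look for; it is an illustration, not the project's assertAuthHardenerHeaders implementation:

```typescript
// Hypothetical sketch of the security checks above, written against the
// standard Headers API. Returns the names of any failed checks.
function checkSecurityHeaders(headers: Headers): string[] {
  const failures: string[] = [];
  const csp = headers.get("content-security-policy") ?? "";
  if (!csp.includes("'strict-dynamic'")) failures.push("csp-strict-dynamic");
  if (!headers.get("x-nonce")) failures.push("x-nonce");
  return failures;
}

// A response carrying both headers passes every check.
const hardened = new Headers({
  "content-security-policy": "script-src 'nonce-abc123' 'strict-dynamic'",
  "x-nonce": "abc123",
});
console.log(checkSecurityHeaders(hardened)); // []

// An empty header set fails both checks.
console.log(checkSecurityHeaders(new Headers()).length); // 2
```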

Common AI Mistakes

Watch for these recurring problems in generated tests:

  • Missing /** @vitest-environment node */ on API route, middleware, and server action tests
  • Importing test utilities from relative or aliased paths instead of @workspace/app-shell/lib/test-utils
  • Querying by CSS selector instead of semantic queries like getByRole
  • Omitting security assertions (CSP, x-nonce, assertAuthHardenerHeaders) for protected routes
  • Mocking internal functions instead of external boundaries

Best Practices with AI

1. Always Reference Testing Context

"Read llms/TESTING.md and generate..."

This ensures AI follows project testing conventions.

2. Specify Test Type

"Generate unit tests..." or "Generate integration tests..." or "Generate E2E tests..."

Be explicit about what type of test you need.

3. Request Security Validation

"Include security header assertions and CSRF validation per TESTING.md"

4. Validate Generated Tests

Check that AI tests:

  • Use correct test environment (@vitest-environment node for API routes)
  • Import from @workspace/app-shell/lib/test-utils
  • Use semantic queries (getByRole, not CSS selectors)
  • Include security assertions for protected routes
  • Have proper mocking at boundaries

5. Run Tests Immediately

pnpm test path/to/generated.test.ts

Don't commit untested AI-generated code.

Testing Checklist for AI Code

When AI generates tests, verify:

  • Correct test environment annotation (Node for API routes)
  • Imports from @workspace/app-shell/lib/test-utils
  • Uses TestServicesProvider for components with services
  • Semantic queries (getByRole, getByLabelText) not CSS selectors
  • Security assertions for protected routes/APIs
  • Mocks at boundaries (external APIs, not internal functions)
  • Tests pass when run with pnpm test
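Several checklist items are mechanical enough to script. This hypothetical sketch (reviewGeneratedTest is not a project utility) flags the most common problems with simple string checks on the generated source:

```typescript
// Hypothetical sketch: flag common issues in AI-generated test source.
// Heuristic string checks only -- a real review still needs human eyes.
function reviewGeneratedTest(source: string, isApiRoute: boolean): string[] {
  const issues: string[] = [];
  if (isApiRoute && !/@vitest-environment\s+node\b/.test(source)) {
    issues.push("missing /** @vitest-environment node */ annotation");
  }
  if (source.includes("~/lib/test-utils")) {
    issues.push("imports from ~ alias instead of @workspace/app-shell/lib/test-utils");
  }
  if (source.includes("querySelector")) {
    issues.push("uses CSS selectors instead of semantic queries like getByRole");
  }
  return issues;
}

const generated = `import { TestServicesProvider } from "~/lib/test-utils";
test("loads", () => { document.querySelector(".btn"); });`;

// Flags all three problems: missing annotation, wrong import path, CSS selector.
console.log(reviewGeneratedTest(generated, true).length); // 3
```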

Templates for AI

Point AI to these patterns: