A Playwright-driven suite featuring AI-powered self-healing, API orchestration, UI automation, and CI/CD integration.
This framework is designed to provide a robust, scalable, and maintainable automation solution for the Automation Exercise platform. It demonstrates a professional approach to Quality Engineering by balancing speed, reliability, and clear reporting.
- 🤖 AI-Powered Self-Healing: Integrated an AI Bridge that uses a local Ollama instance (Llama 3.2) to dynamically suggest and "heal" broken locators at runtime based on the DOM context.
- Parallelization & Sharding: UI tests are sharded across multiple GitHub Actions runners to minimize execution time.
- Page Object Model (POM): Applied to both UI components and API endpoints to centralize logic and reduce maintenance.
- Unified Reporting: Custom GitHub Actions workflow that merges API and UI results into a single, comprehensive HTML dashboard hosted on GitHub Pages.
Key directories and their roles:
```mermaid
flowchart TD
    entry[Start Here] --> readme[README.md]
    readme --> testsDir[tests]
    readme --> srcDir[src]
    srcDir --> pagesDir[src/pages]
    srcDir --> apiDir[src/api]
    srcDir --> fixturesDir[src/fixtures]
    srcDir --> utilsDir[src/utils]
    srcDir --> aiDir[src/ai-engine]
    testsDir --> authDir[tests/auth]
    testsDir --> apiTests[tests/api]
    testsDir --> e2eTests[tests/e2e]
    testsDir --> visualTests[tests/visual]
    testsDir --> aiTests[tests/ai-demo]
```
- src/pages – Page Object Model for the UI (Home, Login, Cart, Checkout, Payment, etc.).
- src/api – API client wrappers for backend endpoints.
- src/fixtures – shared fixtures (base, user lifecycle, API).
- src/utils – helpers (data-helper, test-utils, user-factory).
- src/ai-engine – AI bridge for self-healing locators.
- tests/auth – login and signup flows.
- tests/api – API specs (user lifecycle, products, brands, login).
- tests/e2e – end-to-end flows (e.g. place order).
- tests/visual – visual regression snapshots.
- tests/ai-demo – AI self-healing demos.
See PLAN.md for a roadmap and design notes on evolving the framework.
This framework includes a "Pro" feature for local development: an experimental AI-driven self-healing mechanism. Using Ollama (Llama 3.2), the framework can:
- Dynamic Recovery: Detect broken CSS selectors during execution.
- LLM Inference: Send HTML snippets to the local AI to suggest a "healed" selector based on original intent.
- Developer Workflow: These tests are tagged with `@ai-healing`. They are designed to be run locally by developers to identify and fix brittle locators before committing code.
Note: These tests are excluded from CI and the standard Docker build to maintain fast execution speeds and avoid infrastructure bottlenecks.
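To make the LLM-inference step concrete, here is a hedged sketch of the kind of chat payload such a bridge might assemble before calling the local model via Ollama's OpenAI-compatible endpoint. `buildHealingPrompt` is a hypothetical helper for illustration, not the repo's actual src/ai-engine/ai-bridge.ts API:

```typescript
// Hypothetical helper: assemble an OpenAI-style chat payload asking the
// local LLM (e.g. Ollama's /v1/chat/completions) to repair a selector.
export function buildHealingPrompt(
  brokenSelector: string,
  goal: string,
  domSnippet: string,
) {
  return {
    model: process.env.AI_MODEL ?? "llama3.2:3b",
    messages: [
      {
        role: "system",
        content:
          "You repair broken CSS selectors. Reply with a single valid selector only.",
      },
      {
        role: "user",
        content:
          `The selector "${brokenSelector}" no longer matches. ` +
          `Goal: ${goal}. Relevant HTML:\n${domSnippet}`,
      },
    ],
  };
}
```

The bridge would POST this payload to the configured `AI_BASE_URL` and validate the returned selector before use.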
- Where it lives: AI healing is used only in `@ai-healing` suites. Key files:
  - tests/ai-demo/self-healing.spec.ts – direct use of the AI bridge and `logHealing`.
  - tests/ai-demo/smart-click.spec.ts – healing via `HomePage.clickContactUs` and `smartClick`.
  - src/pages/base.page.ts – `smartClick(selector, goal, meta?)` tries the locator, then calls the AI bridge on failure and logs the result.
  - src/ai-engine/ai-bridge.ts – `askLocalAI`/`getHealedLocatorOrThrow` and `logHealing(original, fixed, goal, meta?)` with optional test name, decision, and model.
- When AI runs: only when a locator fails (e.g. a click times out) inside a flow that uses `smartClick` or the bridge. Normal tests do not call the AI.
- Failure handling: if the healed selector is invalid or the click fails, the test fails with an error. If the LLM returns an unusable selector, `getHealedLocatorOrThrow` throws so the test fails with a clear message.
- Logs: every successful healing is appended to `healing-report.log` at the project root with timestamp, test name (if provided), goal, original selector, healed selector, decision, and model. Inspect this file after running `npx playwright test --grep @ai-healing` locally to see what was healed.
- Note on infrastructure: `@ai-healing` tests are designed to run locally, where the Ollama model is hosted. If you run them without Ollama installed, you will see a connection error (e.g. `ECONNREFUSED` to `localhost:11434`), which makes it clear that Ollama is required.
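The try-then-heal flow described above can be sketched as follows. This is a simplified illustration, not the actual base.page.ts implementation; `tryClick` and `askAI` are injected stand-ins for Playwright's locator click and the Ollama call, so the control flow is testable without a browser:

```typescript
// Simplified sketch of a self-healing click flow.
type ClickFn = (selector: string) => Promise<boolean>;
type HealFn = (brokenSelector: string, goal: string) => Promise<string | null>;

export async function smartClick(
  selector: string,
  goal: string,
  tryClick: ClickFn,
  askAI: HealFn,
): Promise<string> {
  // 1. Try the original locator first: normal tests never reach the AI.
  if (await tryClick(selector)) return selector;

  // 2. On failure, ask the local LLM for a replacement selector.
  const healed = await askAI(selector, goal);
  if (!healed) {
    throw new Error(`AI returned no usable selector for goal "${goal}"`);
  }

  // 3. Retry with the healed selector; if it also fails, surface a clear error.
  if (await tryClick(healed)) {
    console.log(`[healing] ${selector} -> ${healed} (${goal})`);
    return healed;
  }
  throw new Error(`Healed selector "${healed}" still failed for goal "${goal}"`);
}
```

Keeping the happy path AI-free is the key design choice: the model is consulted only after a real failure, so healthy suites pay no latency cost.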
- Schema Validation: uses TypeScript interfaces and `expect.any()` to verify response structures dynamically, preventing brittle tests.
- Negative Testing: validates edge cases and error handling (e.g., verifying `405 Method Not Allowed` for invalid operations).
- Soft Assertions: employed for bulk data validation (such as the Brands and Products lists) to ensure comprehensive error logging without stopping the suite prematurely.
- State Management (in progress): implementing `storageState` to share authentication across test shards, bypassing redundant login steps.
- Cross-Browser Testing: configured to run across Chromium, Firefox, and WebKit via Playwright's bundled browser engines.
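The schema-validation idea can be illustrated with a plain runtime type guard. This is a standalone sketch with a hypothetical `UserResponse` shape, not the framework's actual interfaces:

```typescript
// Hypothetical response shape for illustration; the real interfaces live in the repo.
interface UserResponse {
  responseCode: number;
  user: { id: number; name: string; email: string };
}

// Runtime guard mirroring the interface, so tests fail loudly on shape drift
// instead of asserting on exact (brittle) field values.
export function isUserResponse(body: unknown): body is UserResponse {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  const user = b.user as Record<string, unknown> | undefined;
  return (
    typeof b.responseCode === "number" &&
    typeof user === "object" && user !== null &&
    typeof user.id === "number" &&
    typeof user.name === "string" &&
    typeof user.email === "string"
  );
}
```

In a Playwright API spec the same intent is usually expressed inline with `expect.objectContaining(...)` and `expect.any(Number)` against the parsed JSON body.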
This repo uses Playwright title tags (e.g., `@smoke`) so you can select suites via `--grep`:

- `@smoke`: small critical-path checks for fast feedback (used by `npm run test:smoke`).
- `@api`: API-focused specs (also selectable via `--project=api-tests`).
- `@visual`: visual regression specs (used by `--project=visual-regression` and `--grep @visual`).
- `@ai-healing`: local-only AI self-healing demos (excluded from CI by default).
- `@e2e`: end-to-end flows (e.g. the full place-order journey).
- `@flaky`: temporarily flaky tests; run in isolation with `npm run test:flaky` to debug or triage.
The framework reduces flakiness in several ways:
- Retries and diagnostics: in playwright.config.ts, CI runs use `retries: 2`, `trace: 'on-first-retry'`, and `screenshot: 'only-on-failure'` so failures produce traces and screenshots for debugging.
- Visual stability: visual regression tests in tests/visual use src/utils/test-utils.ts: `TestUtils.blockAds` (network blocking and CSS hiding of ad slots) and `TestUtils.prepareForScreenshot` (font normalization, animations disabled, scrollbar hidden, full-page scroll before capture). Call both before `toHaveScreenshot()` so snapshots are stable across runs.
- Managing flaky tests: tests that are temporarily flaky can be tagged with `@flaky` in the title. Run only those with `npm run test:flaky` to isolate and fix them without blocking the main suite.
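The retry and diagnostics settings above would look roughly like this in playwright.config.ts. This is a fragment assuming a standard Playwright setup; the repo's actual config has more projects and options:

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Retry only on CI so local failures surface immediately.
  retries: process.env.CI ? 2 : 0,
  use: {
    // Capture a trace on the first retry and screenshots only for failures.
    trace: "on-first-retry",
    screenshot: "only-on-failure",
  },
});
```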
Visual specs in tests/visual follow a consistent pattern: block ads, navigate and wait for the content you need, prepare for screenshot via TestUtils, then capture:

```typescript
await TestUtils.blockAds(page);
// Navigate and wait for the content you need.
await TestUtils.prepareForScreenshot(page);
await expect(page).toHaveScreenshot();
```
See tests/visual/home.visual.spec.ts and tests/visual/login.visual.spec.ts.
Curated specs that show what the framework can do:
- Auth: tests/auth/login.spec.ts, tests/auth/signup.spec.ts – Login form validation, happy path, logout, signup with existing-email, and full user lifecycle.
- API: tests/api/user-lifecycle.api.spec.ts – Full user CRUD (create, get, update, delete) with schema validation and negative paths.
- Visual: tests/visual/home.visual.spec.ts, tests/visual/login.visual.spec.ts – baseline snapshots with `TestUtils.blockAds` and `prepareForScreenshot` for stability.
- AI demo: tests/ai-demo/smart-click.spec.ts, tests/ai-demo/self-healing.spec.ts – self-healing locators via `smartClick` and the AI bridge when a selector fails.
- E2E: tests/e2e/place-order.logged.spec.ts – full checkout: add product, cart, checkout, payment, and order success message.
- Fixtures: uses `homePage`, cart from the "View Cart" modal, `CheckoutPage`, and `PaymentPage` (see src/pages/checkout.page.ts, src/pages/payment.page.ts).
- Install dependencies:

  ```bash
  npm install
  ```

- Run all tests:

  ```bash
  npx playwright test
  ```

- Run a specific suite:

  ```bash
  npx playwright test --project=api-tests   # API only
  npx playwright test --project=chromium    # UI only
  npx playwright test --grep @ai-healing    # AI self-healing
  ```

Note: for `@ai-healing`, start Ollama first (ensure you have Ollama installed and `llama3.2` pulled).
The framework can be configured via environment variables to better model real-world environments:
- `PLAYWRIGHT_BASE_URL`: base URL for all UI and API tests.
  - Default: `https://automationexercise.com`
  - Used in playwright.config.ts via `appConfig.baseUrl`.
- `AI_ENABLED`: gate for AI self-healing behaviour.
  - Default: `true` (set to `'false'` to treat AI as disabled in your own code paths).
- `AI_BASE_URL`: base URL for the AI / LLM endpoint.
  - Default: `http://localhost:11434/v1` (local Ollama).
- `AI_API_KEY`: API key used by the AI client.
  - Default: `ollama` (placeholder for local Ollama setups that do not enforce authentication).
- `AI_MODEL`: model name used when calling the AI engine.
  - Default: `llama3.2:3b`
- `TEST_DATA_SEED`: optional. When set (e.g. to an integer), Faker is seeded so user data (names, emails, etc.) is reproducible across runs. See Test Data Strategy below.
All of these variables are wired through src/config.ts and consumed by both playwright.config.ts and src/ai-engine/ai-bridge.ts, so you can easily point the same test suite at different environments without changing code.
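As an illustration of that wiring, such a config module typically reads each variable with a fallback. This is a hedged sketch; the real src/config.ts may differ in names and structure:

```typescript
// Centralized, typed access to environment variables with safe defaults.
// Accepting `env` as a parameter keeps the function easy to test.
export function loadAppConfig(env: Record<string, string | undefined> = process.env) {
  return {
    baseUrl: env.PLAYWRIGHT_BASE_URL ?? "https://automationexercise.com",
    ai: {
      enabled: env.AI_ENABLED !== "false", // defaults to true
      baseUrl: env.AI_BASE_URL ?? "http://localhost:11434/v1",
      apiKey: env.AI_API_KEY ?? "ollama",
      model: env.AI_MODEL ?? "llama3.2:3b",
    },
  };
}
```

Both playwright.config.ts and the AI bridge can then import one object instead of reading `process.env` in scattered places.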
- User data generation: src/utils/user-factory.ts uses `@faker-js/faker` to generate user payloads (name, email, password, address, etc.). `generateUserData(false)` returns minimal required fields; `generateUserData(true)` adds title, birth date, company, address2, newsletter, and offers so signup/API tests have full profiles.
- Lifecycle and cleanup: src/fixtures/user.fixtures.ts defines fixtures that create and (where needed) delete users via the API: `preCreatedUser`/`preCreatedFullUser` create a user, yield it to the test, then delete the account in teardown; `persistentUser` is created once and not deleted by the fixture. tests/global.teardown.ts runs after the suite and deletes the persistent user used by auth setup, then removes playwright/.auth session and user files so the environment is clean for the next run.
- Static expectations: src/utils/data-helper.ts centralizes fixed test data: dropdown options (months, days, years, countries), cart table headers, and the single expected product used for cart flows (e.g. "Stylish Dress"). Use it wherever tests need to assert on known values instead of Faker output.
- Determinism (optional): set the `TEST_DATA_SEED` environment variable to a number (e.g. `42`) so Faker is seeded at load time. Every run with the same seed will then produce the same sequence of names, emails, and other generated fields. Use this when you need reproducible data (e.g. debugging, CI snapshots, or reducing variance). Leave it unset for realistic, varied data across runs.
- Trade-offs: random data (no seed) exercises more combinations and avoids coupling tests to a single dataset. Seeded data makes failures reproducible and logs/snapshots stable. Prefer random data by default; switch to a seed when debugging flakiness or when you need a fixed dataset.
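With Faker, determinism is a one-liner (`faker.seed(Number(process.env.TEST_DATA_SEED))`). The underlying "same seed, same sequence" property can be demonstrated with a tiny self-contained PRNG; `mulberry32` and `fakeEmail` below are illustrative stand-ins, not repo code:

```typescript
// mulberry32: a tiny deterministic PRNG. Same seed -> same sequence,
// which is exactly what TEST_DATA_SEED buys you with Faker.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Deterministic "email" generation under a fixed seed.
export function fakeEmail(rand: () => number): string {
  const id = Math.floor(rand() * 1e6);
  return `user${id}@example.com`;
}
```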
- Copy the sample file: `cp .env.sample .env`
- (Optional) Adjust the values if you want to point at a different base URL or AI endpoint.
- Load the variables before running tests, for example:
  - Export via your shell: `export $(grep -v '^#' .env | xargs) && npx playwright test`
  - Or use a helper like `dotenv-cli`/`env-cmd` if you prefer.
In CI, set the same variables as environment variables or secrets: `PLAYWRIGHT_BASE_URL`, `AI_ENABLED`, `AI_BASE_URL`, `AI_API_KEY`, `AI_MODEL`.

For GitHub Actions, a common pattern is:

- Define repository or environment secrets: `PLAYWRIGHT_BASE_URL`, `AI_ENABLED`, `AI_BASE_URL`, `AI_API_KEY`, `AI_MODEL`.
- Map them into your workflow jobs using `env:`. For example:
```yaml
env:
  PLAYWRIGHT_BASE_URL: ${{ secrets.PLAYWRIGHT_BASE_URL }}
  AI_ENABLED: ${{ secrets.AI_ENABLED }}
  AI_BASE_URL: ${{ secrets.AI_BASE_URL }}
  AI_API_KEY: ${{ secrets.AI_API_KEY }}
  AI_MODEL: ${{ secrets.AI_MODEL }}
```

You can place this `env` block at the `jobs.api-tests`, `jobs.ui-tests`, and/or `jobs.merge-reports` levels in .github/workflows/playwright.yml depending on which suites you want to configure.
To ensure a consistent environment and avoid "it works on my machine" issues, you can run the suite using the official Playwright Docker image:
- Build the image:

  ```bash
  docker build -t playwright-automation .
  ```

- Run tests in the container:

  ```bash
  docker run --rm -v $(pwd):/work/ playwright-automation npx playwright test --grep-invert @ai-healing
  ```
After execution, the report is automatically generated. To view the latest results locally:

```bash
npx playwright show-report
```

Metrics and reporting for portfolio visibility:
| Tool | Command | Output |
|---|---|---|
| Test Metrics | `npm run metrics` | Success rate (%), total duration (ms), passed/failed/skipped/flaky counts, breakdown by project |
| API Coverage | `npm run metrics:api-coverage` | Total % of API endpoints covered by automation |
| Trends | `npm run trends` | Appends current run to data/trends.json (keeps last 30 entries) |
| Trends Chart | `npm run trends:chart` | Generates allure-report/trends-chart.html for historical analytics of success rates and stability |
| Allure Report | `npm run report:allure` | Single-page report with Allure + Trends tabs; Environment shows API_Coverage, Success_Rate, Flaky_Tests; deployed to GitHub Pages |
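The trends behaviour (append the current run, keep only the last 30 entries) boils down to a small pure function. This is a sketch under the assumption that data/trends.json holds a flat JSON array; the real script's entry shape may differ:

```typescript
// Hypothetical trend entry; the real data/trends.json may carry more fields.
interface TrendEntry {
  timestamp: string;
  successRate: number; // percentage of passing tests
}

// Append the newest run and cap history at `limit` entries, dropping the oldest first.
export function appendTrend(
  history: TrendEntry[],
  entry: TrendEntry,
  limit = 30,
): TrendEntry[] {
  return [...history, entry].slice(-limit);
}
```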
Workflow: Run any test suite → npm run metrics → npm run trends → npm run report:allure → share with recruiters.
```bash
# Run a suite, then metrics (works for API, guest, logged-in, or full suite)
npx playwright test --project=chromium-guest
npm run metrics

# Or use combined scripts that run tests + metrics
npm run test:api:metrics     # API tests + metrics
npm run test:guest:metrics   # Guest UI tests + metrics
npm run test:logged:metrics  # Logged-in UI tests + metrics
npm run test:all:metrics     # Full suite (excl. AI/visual) + metrics
```

API Coverage Tracker: edit coverage-map.json to list endpoints and mark `automated: true/false`. The script calculates total % coverage.
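The coverage calculation itself is straightforward. A hedged sketch, assuming coverage-map.json holds a flat list of endpoints with an `automated` flag (the real file and script may be structured differently):

```typescript
// Hypothetical shape of one coverage-map.json entry.
interface EndpointEntry {
  endpoint: string;
  automated: boolean;
}

// Percentage of endpoints marked automated, rounded to one decimal place.
export function apiCoverage(entries: EndpointEntry[]): number {
  if (entries.length === 0) return 0;
  const automated = entries.filter((e) => e.automated).length;
  return Math.round((automated / entries.length) * 1000) / 10;
}
```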
To make those Docker commands work, make sure you have a simple Dockerfile in your root folder. If you don't have one yet, here is a standard one that works perfectly with Playwright:
```dockerfile
# Use the official Playwright image with all browsers pre-installed
FROM mcr.microsoft.com/playwright:v1.57.0-noble

# Set the working directory
WORKDIR /app

# Copy package files and install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the project
COPY . .

# Default command
CMD ["npx", "playwright", "test"]
```