How to Perform Visual Regression Testing using WebdriverIO
A large share of UI defects stem from unintended visual changes that functional tests fail to catch. These subtle layout shifts, styling issues, or rendering inconsistencies often slip into production unnoticed.
WebdriverIO is a powerful open-source test automation framework for Node.js that enables end-to-end testing of web and mobile applications. It integrates smoothly with modern test runners and supports plugins for extended capabilities like visual regression testing.
This guide explains visual regression testing in WebdriverIO and how to implement it effectively.
What is Visual Regression Testing?
Visual regression testing is a method used to detect unintended changes in a user interface by comparing screenshots over time. It focuses on how an application looks, not just how it functions.
The process works by capturing a baseline image of a page or component. During future test runs, new screenshots are taken and compared against that baseline. If differences exceed a defined threshold, the test flags a failure.
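The comparison step can be illustrated with a simplified sketch: given two images as flat RGBA byte arrays, count the differing pixels and fail when the mismatch percentage exceeds a threshold. This is an illustration only, not the actual algorithm of any specific tool, and the 5% threshold is an arbitrary example value.

```js
// Simplified illustration of baseline-vs-current comparison.
// Real tools operate on decoded PNG data; here each "image" is a
// flat RGBA byte array (4 bytes per pixel).
function mismatchPercentage(baseline, current) {
  if (baseline.length !== current.length) return 100;
  let diffPixels = 0;
  const totalPixels = baseline.length / 4;
  for (let i = 0; i < baseline.length; i += 4) {
    if (
      baseline[i] !== current[i] ||         // R
      baseline[i + 1] !== current[i + 1] || // G
      baseline[i + 2] !== current[i + 2]    // B
    ) {
      diffPixels++;
    }
  }
  return (diffPixels / totalPixels) * 100;
}

// Two 2x1 "images": the second pixel changed in the current run.
const baseline = [255, 0, 0, 255, 0, 255, 0, 255];
const current  = [255, 0, 0, 255, 0, 0, 255, 255];

const mismatch = mismatchPercentage(baseline, current);
console.log(mismatch); // 50 — one of two pixels differs

const THRESHOLD = 5; // fail the test above 5% mismatch (illustrative)
console.log(mismatch > THRESHOLD ? 'FAIL' : 'PASS'); // FAIL
```

Real comparison services add anti-aliasing tolerance, ignore regions, and diff-image generation on top of this basic idea.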
Unlike functional component testing, which checks logic and workflows, visual regression testing tools identify layout shifts, missing elements, broken styling, font inconsistencies, or rendering issues across browsers and devices.
This makes it especially valuable for teams shipping frequent UI updates or maintaining complex design systems.
Visual Regression Testing with WebdriverIO Framework
WebdriverIO allows you to automate browser interactions using JavaScript or TypeScript. By extending it with visual comparison services, you can turn standard UI tests into visual regression checks without changing your overall testing workflow.
The framework works by executing browser commands, navigating to pages, interacting with elements, and capturing screenshots during test execution. These screenshots can then be compared against stored baseline images.
WebdriverIO integrates with image comparison services such as wdio-image-comparison-service, which performs pixel-level analysis to detect visual differences. The service highlights mismatches and applies configurable thresholds to control sensitivity.
Because WebdriverIO supports multiple browsers and cloud providers, visual tests can run across different environments. This helps teams detect browser-specific rendering issues before release.
In short, WebdriverIO provides the automation layer, while image comparison services handle the visual diffing logic.
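To make that division of labor concrete, here is a minimal sketch of a helper you might call from a wdio spec. It assumes a configured WebdriverIO session with wdio-image-comparison-service enabled; the URL and screenshot names are illustrative.

```js
// Sketch: `browser` is the WebdriverIO instance the test runner provides.
async function captureAndCompare(browser) {
  // Automation layer: WebdriverIO core navigates and captures.
  await browser.url('https://example.com');
  await browser.saveScreenshot('./raw-screenshot.png'); // built-in command

  // Diffing layer: command added by wdio-image-comparison-service.
  const mismatch = await browser.checkScreen('homepage');
  return mismatch; // mismatch percentage against the stored baseline
}

module.exports = { captureAndCompare };
```

Inside a test you would call `await captureAndCompare(browser)` and assert on the returned mismatch.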
What is Wdio-image-comparison-service?
wdio-image-comparison-service is a WebdriverIO plugin that adds visual UI testing capabilities to your existing automation suite. Instead of only validating functional behavior like clicks and API responses, it enables your tests to verify the visual appearance of pages and components through automated screenshot comparisons.
The service captures screenshots during test execution and compares them against previously stored baseline images. If visual differences exceed a defined threshold, the test fails and generates a diff image highlighting the changes.
Because it integrates directly into the WebdriverIO ecosystem, it works seamlessly with your current test runner, reporters, and CI/CD workflows.
Key Capabilities:
- Full-page screenshot comparison: Capture and compare entire web pages to detect layout shifts, broken UI sections, or missing elements.
- Element-level comparison: Validate specific components such as buttons, banners, or forms without testing the entire page.
- Configurable mismatch thresholds: Define acceptable levels of visual difference to reduce false positives.
- Automatic diff image generation: Generate visual artifacts that highlight exactly where pixel-level differences occurred.
- Cross-browser support: Run visual comparisons across different browsers and environments within your WebdriverIO setup.
Use Cases of Wdio-image-comparison-service
Visual regression testing with wdio-image-comparison-service is especially useful when UI stability is critical. It helps teams detect unintended visual changes early in the development cycle, before they reach production.
Because it integrates directly with WebdriverIO, you can extend your existing functional tests to include visual assertions, making your test suite more comprehensive without building a separate visual testing pipeline.
Here are the notable use cases:
- Detecting layout shifts after CSS changes: When styling updates are pushed, even minor CSS tweaks can break spacing, alignment, or positioning. Visual comparisons immediately highlight unintended layout movement.
- Validating responsive behavior: Capture screenshots at multiple viewport sizes to ensure elements render correctly on desktop, tablet, and mobile screens.
- Component-level UI testing: Test individual UI components like navigation bars, modals, or checkout forms to ensure design consistency across releases.
- Regression checks in CI/CD pipelines: Automatically compare screenshots on every pull request to catch visual issues before merging code.
- Cross-browser rendering validation: Identify visual inconsistencies caused by different browser rendering engines, especially font and spacing differences.
- Preventing UI breakage during refactoring: When restructuring code or migrating frameworks, visual tests ensure that the user-facing interface remains unchanged.
This makes wdio-image-comparison-service a practical solution for teams that want visual coverage without introducing a completely new testing stack.
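The responsive use case above can be sketched as a helper that resizes the window and runs a viewport comparison at each breakpoint. This assumes a wdio session with wdio-image-comparison-service configured; the breakpoint sizes and URL are illustrative, not prescriptive.

```js
// Sketch: viewport sizes to cover; adjust to your own breakpoints.
const breakpoints = [
  { name: 'mobile',  width: 375,  height: 812 },
  { name: 'tablet',  width: 768,  height: 1024 },
  { name: 'desktop', width: 1440, height: 900 },
];

// Resize the window and compare the viewport at each breakpoint.
// `browser` is the WebdriverIO instance available inside a wdio test;
// checkScreen() comes from wdio-image-comparison-service.
async function checkAllBreakpoints(browser, url) {
  const results = {};
  for (const { name, width, height } of breakpoints) {
    await browser.setWindowSize(width, height);
    await browser.url(url);
    results[name] = await browser.checkScreen(`homepage-${name}`);
  }
  return results; // e.g. { mobile: 0, tablet: 0, desktop: 0 }
}

module.exports = { breakpoints, checkAllBreakpoints };
```

In a spec you would call `await checkAllBreakpoints(browser, 'https://example.com')` and assert each returned mismatch is within tolerance.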
How to Run Visual Regression Test using WebdriverIO
Setting up visual regression testing with WebdriverIO and wdio-image-comparison-service involves a few structured steps. Below is a practical, end-to-end walkthrough similar to how teams implement it in real-world automation suites.
1. Install WebdriverIO and Required Packages
If you don’t already have a WebdriverIO project, initialize one:
```bash
npm init wdio@latest .
```

Then install the image comparison service:

```bash
npm install --save-dev wdio-image-comparison-service
```

This package adds visual comparison commands directly to your WebdriverIO test environment.
2. Configure WebdriverIO
Open your wdio.conf.js file and add the image comparison service under services.
```js
services: [
  ['image-comparison', {
    baselineFolder: './tests/visual/baseline/',
    formatImageName: '{tag}-{logName}-{width}x{height}',
    screenshotPath: './tests/visual/',
    savePerInstance: true,
    autoSaveBaseline: true,
    blockOutStatusBar: true,
    blockOutToolBar: true,
    disableCSSAnimation: true,
  }],
],
```
Key Configuration Options
- baselineFolder: Stores reference images.
- screenshotPath: Stores actual and diff images.
- autoSaveBaseline: Automatically creates baseline images on the first run.
- disableCSSAnimation: Prevents flakiness from animated elements.
3. Capture a Baseline Image
On the first test execution, baseline images are created automatically (if autoSaveBaseline: true is enabled).
For example, when running the test for the first time:
```bash
npx wdio run wdio.conf.js
```

This will generate baseline images inside:

```
/tests/visual/baseline/
```

These baseline images act as the reference for all future comparisons.
4. Write a Visual Regression Test
Now create a test file, for example: visual.test.js
```js
describe('Homepage Visual Test', () => {
  it('should match the homepage layout', async () => {
    await browser.url('https://example.com');
    const result = await browser.checkFullPageScreen('homepage');
    expect(result).toEqual(0);
  });
});
```
Available Commands
- checkFullPageScreen('name'): Compares the entire page.
- checkScreen('name'): Captures the current viewport.
- checkElement(element, 'name'): Compares a specific element.
Example for element testing:
```js
it('should match the login button', async () => {
  await browser.url('https://example.com');
  const loginBtn = await $('#login-button');
  const result = await browser.checkElement(loginBtn, 'login-button');
  expect(result).toEqual(0);
});
```
5. Run the Test
Execute:
```bash
npx wdio run wdio.conf.js
```

During execution:

- If no baseline exists, it creates one.
- If a baseline exists, it compares new screenshots against it.
- If the mismatch percentage exceeds the threshold, the test fails.
6. Review Test Results
After execution, screenshots are saved in structured folders:
- baseline/ → Reference images
- actual/ → Latest test run images
- diff/ → Highlighted visual differences
If differences are detected:
- The diff image highlights changed pixels.
- The test returns a mismatch percentage.
- You can manually inspect whether changes are expected.
If changes are intentional:
- Replace baseline with new image.
- Commit updated baseline to version control.
Functions Available in WebdriverIO for Image Comparison
The wdio-image-comparison-service extends WebdriverIO with specialized commands for visual testing. These functions allow you to compare entire pages, viewports, or individual elements with minimal code changes.
Below are the most commonly used functions and what they do in practice.
1. checkFullPageScreen()
Captures and compares the entire scrollable page, not just the visible viewport.
```js
const mismatch = await browser.checkFullPageScreen('homepage');
expect(mismatch).toEqual(0);
```
When to use it:
- Landing pages
- Long product pages
- Layout consistency checks
This method scrolls automatically and stitches screenshots to generate a complete comparison.
2. checkScreen()
Captures only the visible viewport.
```js
const mismatch = await browser.checkScreen('homepage-viewport');
expect(mismatch).toEqual(0);
```
When to use it:
- Hero sections
- Above-the-fold content
- Fixed header/footer validation
This is faster than full-page comparison and works well for focused layout checks.
3. checkElement()
Compares a specific DOM element instead of the whole page.
```js
const loginBtn = await $('#login-button');
const mismatch = await browser.checkElement(loginBtn, 'login-button');
expect(mismatch).toEqual(0);
```
When to use it:
- Buttons
- Forms
- Navigation bars
- Modals or dynamic components
Element-level comparison reduces noise and makes tests more stable.
4. saveFullPageScreen()
Captures a screenshot without comparing it.
```js
await browser.saveFullPageScreen('homepage-baseline');
```

This is useful when:
- Manually creating a baseline
- Capturing debug images
- Preparing visual assets for documentation
5. saveScreen()
Saves only the viewport screenshot.
```js
await browser.saveScreen('viewport-image');
```

This is helpful for lightweight visual checks or debugging UI states.
6. saveElement()
Captures a screenshot of a specific element.
```js
const card = await $('.product-card');
await browser.saveElement(card, 'product-card');
```
This function is often used to establish initial baselines for components.
Understanding the Return Value
All check* methods return a mismatch percentage.
- 0 → No visual difference.
- Greater than 0 → Visual differences detected.
- Above configured threshold → Test fails.
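Instead of requiring an exact 0, tests can accept small differences by asserting against a tolerance. A minimal sketch (the 0.1% tolerance is an illustrative choice, not a recommended default):

```js
// Sketch: pass the check as long as mismatch stays within tolerance.
const TOLERANCE = 0.1; // percent; tune per project to reduce flakiness

function withinTolerance(mismatchPercentage, tolerance = TOLERANCE) {
  return mismatchPercentage <= tolerance;
}

console.log(withinTolerance(0));    // true  — identical screenshots
console.log(withinTolerance(0.05)); // true  — below tolerance
console.log(withinTolerance(2.4));  // false — visible regression
```

In a wdio spec this pattern becomes something like `expect(await browser.checkScreen('hero')).toBeLessThanOrEqual(TOLERANCE)`.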
You can configure tolerance in wdio.conf.js:
```js
services: [
  ['image-comparison', {
    savePerInstance: true,
    autoSaveBaseline: true,
    blockOutStatusBar: true,
    blockOutToolBar: true,
    disableCSSAnimation: true,
    ignoreAlpha: true,
  }],
],
```
Fine-tuning thresholds and settings helps reduce noise while keeping tests reliable.
Using Percy For Visual Testing
While wdio-image-comparison-service relies on pixel-level comparison, Percy approaches visual testing differently. It captures DOM snapshots instead of static screenshots, then renders them across multiple browsers in the cloud. This reduces false positives caused by environment differences such as fonts, rendering engines, or machine-specific settings.
BrowserStack Percy integrates directly with WebdriverIO, allowing you to add visual checks to your existing functional tests without restructuring your framework. Because snapshots are rendered in a consistent cloud environment, teams avoid local rendering inconsistencies and gain reliable cross-browser coverage.
Another key advantage is intelligent diff detection. Instead of failing tests purely on pixel mismatch, Percy highlights meaningful visual changes and provides a review workflow in the dashboard. Teams can approve changes, leave comments, and track visual history over time.
Setting Up Percy with WebdriverIO
1. Install Percy Dependencies
```bash
npm install --save-dev @percy/cli @percy/webdriverio
```

2. Update WebdriverIO Configuration
Add Percy to your test setup:
```js
const percySnapshot = require('@percy/webdriverio');

describe('Homepage Visual Test with Percy', () => {
  it('should capture homepage snapshot', async () => {
    await browser.url('https://example.com');
    await percySnapshot(browser, 'Homepage');
  });
});
```
3. Run Tests with Percy
Export your Percy token:
```bash
export PERCY_TOKEN=your_project_token
```

Run tests using the Percy CLI:

```bash
npx percy exec -- npx wdio run wdio.conf.js
```

During execution:
- Percy captures DOM snapshots.
- Snapshots are uploaded to Percy.
- Percy renders across supported browsers.
- Visual differences are highlighted in the dashboard.
Why Teams Prefer Percy Over Pure Screenshot Comparison
- Cross-browser rendering in the cloud: Tests run once, but snapshots are rendered across multiple browsers.
- Reduced flakiness: DOM-based snapshotting avoids local environment noise.
- Visual review workflow: Teams can approve, reject, or comment on changes directly in the UI.
- CI/CD integration: Works seamlessly with modern pipelines.
- Parallel snapshot processing: Faster feedback for large test suites.
Percy extends screenshot comparison with cross-browser rendering and a structured review workflow, catching visual regressions that pixel-only checks can miss.
Conclusion
Visual regression testing with WebdriverIO gives teams a practical way to catch unintended UI changes alongside functional bugs. Using tools like wdio-image-comparison-service, you can compare full pages, viewports, or individual elements directly within your existing automation suite. This approach works well for controlled environments where pixel-level precision is important.
However, as applications grow and cross-browser coverage becomes critical, tools like Percy provide a more scalable solution. DOM-based snapshot rendering, cloud comparisons, and structured review workflows reduce noise and improve collaboration. Choosing the right approach depends on your project’s complexity, team workflow, and need for cross-browser consistency.