Automated Visual Testing Using XCUITest
Ensuring visual consistency on iOS is more complex than it appears.
Apple may control both hardware and software, but developers still need to support multiple iPhone and iPad screen sizes, dynamic type settings, dark mode variations, and frequent iOS updates.
For iOS teams, XCUITest is the primary framework for automating UI tests. It enables developers to simulate real user interactions and validate app behavior within the native iOS testing environment.
In this article, we’ll explore how visual testing works for mobile applications and examine XCUITest’s role in it.
Visual UI Testing For Mobile Applications
Visual UI testing for mobile applications focuses on verifying that screens render correctly across devices, screen sizes, and system settings. It goes beyond checking whether buttons work and instead validates layout alignment, spacing, typography, colors, and component positioning.
On iOS, visual inconsistencies can appear due to dark mode, dynamic type scaling, safe area insets, or OS updates. These issues may not break functionality, but they directly affect user experience and app quality perception.
Most visual UI testing workflows rely on screenshot capture followed by a visual comparison test. A baseline image is stored, and future builds are compared against it to detect unintended UI changes. This helps teams catch visual regressions early and maintain consistent design across releases.
What is XCUITest?
XCUITest is Apple’s native UI testing framework for iOS applications. It is built on top of XCTest and allows developers to automate user interactions within an app. XCUITest runs inside the iOS testing environment, enabling reliable interaction with UI elements such as buttons, text fields, and navigation flows.
Key Features of XCUITest For Visual Testing
Although primarily designed for functional UI automation, XCUITest can also support visual testing through screenshot capture and comparison workflows.
Here are some of the features that make XCUITest suitable for iOS visual testing:
Screenshot Capture
XCUITest makes it easy to capture screenshots at key checkpoints, such as after navigation, after form submission, or when a modal appears. These screenshots can be used for debugging, documentation, or screenshot-based visual comparisons (when paired with a visual testing tool like BrowserStack App Percy).
Here’s a common pattern using XCUIScreen:
```swift
import XCTest

final class CheckoutVisualTests: XCTestCase {
    func testCheckoutScreenScreenshot() {
        let app = XCUIApplication()
        app.launch()

        // Navigate to the screen you want to validate
        app.buttons["GoToCheckout"].tap()

        // Capture screenshot
        let screenshot = XCUIScreen.main.screenshot()
        let attachment = XCTAttachment(screenshot: screenshot)
        attachment.name = "Checkout Screen"
        attachment.lifetime = .keepAlways
        add(attachment)
    }
}
```
Tips that help screenshots stay consistent:
- Set predictable app state (seeded test data, fixed locale, stable user account).
- Disable animations where possible (or wait for UI to settle before capturing).
- Capture at stable points, after network calls complete and spinners disappear.
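One way to implement the "wait for UI to settle" tip is a small reusable helper that blocks until a given element (such as a loading spinner) disappears before you capture. This is a sketch using XCTest's standard predicate expectations; the timeout default is illustrative:

```swift
import XCTest

// Sketch of a settle-before-capture helper. XCTest polls the element
// until the predicate holds or the timeout elapses.
extension XCTestCase {
    func waitForDisappearance(of element: XCUIElement, timeout: TimeInterval = 5) {
        let gone = NSPredicate(format: "exists == false")
        let expectation = XCTNSPredicateExpectation(predicate: gone, object: element)
        XCTAssertEqual(XCTWaiter().wait(for: [expectation], timeout: timeout), .completed,
                       "Element did not disappear within \(timeout)s")
    }
}
```

Call it right before capturing, e.g. `waitForDisappearance(of: app.activityIndicators["LoadingSpinner"])`.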
Accessibility
Accessibility is the backbone of stable XCUITest UI targeting. For visual testing, reliable element identification matters because your snapshot testing checkpoint should happen after the correct UI state is reached. The most important practice is adding accessibility identifiers to key views and controls.
Example in SwiftUI:
```swift
Button("Pay Now") {
    // action
}
.accessibilityIdentifier("PayNowButton")
```
Example in UIKit:
```swift
payNowButton.accessibilityIdentifier = "PayNowButton"
```

Then in XCUITest:
```swift
let payNow = app.buttons["PayNowButton"]
XCTAssertTrue(payNow.waitForExistence(timeout: 5))
payNow.tap()
```
Good accessibility hygiene improves visual testing because:
- Tests become less flaky, so screenshots are captured at the intended state.
- Changes in copy (button text) won’t break selectors if IDs stay stable.
- You can target complex screens without brittle hierarchy queries.
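The difference between identifier-based and hierarchy-based queries is easy to see side by side; the element names and indices here are illustrative:

```swift
import XCTest

let app = XCUIApplication()

// Stable: finds the element anywhere in the tree by its accessibility identifier.
let payNow = app.buttons["PayNowButton"]

// Brittle: depends on view nesting and ordering, and breaks when either changes.
let fragile = app.otherElements.element(boundBy: 2).buttons.element(boundBy: 0)
```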
Integration with Third-Party Tools
XCUITest provides the execution backbone, and third-party tools add visual comparison, baselines, dashboards, and diff reviews. Most integrations follow the same flow:
- Run XCUITest on a simulator or real device.
- Capture screenshots (or let the tool collect them automatically).
- Upload screenshots as part of CI.
- Compare against baselines, review diffs, and approve intentional changes.
A practical example is saving screenshots to attachments (as shown above) and letting your CI collect test artifacts. Many visual testing tools can consume these artifacts, or you can export images to a known directory.
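If your tooling prefers plain files over XCTest attachments, a small helper can write each capture to a directory that the CI job then uploads. This is a sketch; the `SCREENSHOT_DIR` environment variable is an assumption you would configure on the test runner:

```swift
import XCTest

// Sketch: export captures as PNG files for CI artifact collection.
// SCREENSHOT_DIR is a hypothetical variable set in your CI configuration.
func exportScreenshot(_ screenshot: XCUIScreenshot, named name: String) throws {
    guard let dir = ProcessInfo.processInfo.environment["SCREENSHOT_DIR"] else { return }
    let url = URL(fileURLWithPath: dir, isDirectory: true)
        .appendingPathComponent("\(name).png")
    try screenshot.pngRepresentation.write(to: url)
}
```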
If you want deterministic, tool-friendly screenshots, teams often:
- Standardize device model(s) used for visual runs (e.g., iPhone 17).
- Pin iOS version for baseline creation.
- Use fixed content (mock server, local stubs) to avoid dynamic UI.
Native Integrations
XCUITest works cleanly with Apple’s native tooling, which makes it easy to scale in iOS pipelines:
- Xcode + XCTest Runner: Run UI tests locally with repeatable configurations.
- xcodebuild: Trigger UI test runs in CI with consistent flags.
- XCTest attachments: Persist screenshots, logs, and diagnostics as artifacts.
- Simulators + real devices: Choose speed (simulator) or realism (device) based on your visual testing needs.
A simple CI-friendly command looks like this:
```shell
xcodebuild \
  -project MyApp.xcodeproj \
  -scheme MyAppUITests \
  -destination 'platform=iOS Simulator,name=iPhone 15,OS=latest' \
  test
```
This native alignment is a big reason teams choose XCUITest for UI and visual checkpoints: it fits naturally into iOS development workflows, with minimal glue code.
How Does XCUITest Visual Testing Work?
XCUITest visual testing is usually a repeatable flow: open the app, reach a stable UI state, capture screenshots, then compare them against baselines (manually, or with a visual testing tool).
Step 1: Launch the App in a Known State
Start the app with predictable conditions so screenshots are consistent. A common pattern is using launch arguments or environment flags to enable stubs, mock data, or test accounts.
```swift
import XCTest

final class VisualFlowTests: XCTestCase {
    let app = XCUIApplication()

    override func setUp() {
        continueAfterFailure = false
        app.launchArguments = ["-uiTesting", "-useMockServer"]
        app.launchEnvironment = ["LOCALE": "en_US", "THEME": "light"]
        app.launch()
    }
}
```
Step 2: Navigate to the Screen You Want to Validate
Use accessibility identifiers to move through the app like a user would.
```swift
func goToProfile() {
    app.tabBars.buttons["ProfileTab"].tap()
    XCTAssertTrue(app.staticTexts["ProfileTitle"].waitForExistence(timeout: 5))
}
```
Step 3: Wait for the UI to Settle
Screenshot comparisons fail if a screen is still loading. Wait for stable UI signals, such as a title, a key component, or a “loading” element disappearing.
```swift
func waitForProfileLoaded() {
    let spinner = app.activityIndicators["LoadingSpinner"]
    if spinner.exists {
        // Wait for the spinner to disappear (waitForExistence would do the opposite).
        let gone = NSPredicate(format: "exists == false")
        let expectation = XCTNSPredicateExpectation(predicate: gone, object: spinner)
        XCTAssertEqual(XCTWaiter().wait(for: [expectation], timeout: 5), .completed)
    }
    XCTAssertTrue(app.staticTexts["ProfileTitle"].exists)
}
```
(If you don’t have stable identifiers for loading states, add them; it pays off quickly.)
Step 4: Capture a Screenshot
Capture the screen at your checkpoint and attach it to the test output. CI systems can collect these attachments as artifacts.
```swift
func capture(_ name: String) {
    let screenshot = XCUIScreen.main.screenshot()
    let attachment = XCTAttachment(screenshot: screenshot)
    attachment.name = name
    attachment.lifetime = .keepAlways
    add(attachment)
}
```
Step 5: Compare Against a Baseline
XCUITest itself does not include baseline diffing. Most teams do one of the following:
- Save screenshots as CI artifacts and compare using a visual testing platform.
- Use a snapshot testing library for iOS component screenshots.
- Build a custom comparison step (less common, higher maintenance).
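For teams that do roll their own comparison, the simplest version is byte equality against a stored baseline, with a "first run records the baseline" convention. This is a Foundation-only sketch; the directory layout and naming are illustrative:

```swift
import Foundation

// Minimal custom baseline store: byte-for-byte PNG comparison.
// Real diffing usually needs pixel-level tolerance; this flags any change.
struct BaselineStore {
    let directory: URL

    func check(_ capture: Data, name: String) throws -> Bool {
        let url = directory.appendingPathComponent("\(name).png")
        guard FileManager.default.fileExists(atPath: url.path) else {
            // No baseline yet: record this capture and pass.
            try FileManager.default.createDirectory(at: directory,
                                                    withIntermediateDirectories: true)
            try capture.write(to: url)
            return true
        }
        return try Data(contentsOf: url) == capture
    }
}
```

Byte equality is strict: even a lossless re-encode fails it, which is one reason most teams prefer a tolerance-based diff or a hosted visual platform.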
A simple pattern is to capture a set of “golden flow” screens in one run:
```swift
func testProfileVisualCheckpoint() {
    goToProfile()
    waitForProfileLoaded()
    capture("Profile Screen")
}
```
Step 6: Review Changes and Update Baselines
If differences are expected (new design, spacing updates), approve and update the baseline. If differences are unintended, fix the UI regression and rerun.
The key to making this work long-term is consistency: stable data, stable identifiers, and clear checkpoint selection.
How to Conduct Visual Testing in Landscape or Portrait Mode using XCUITest?
You can change device orientation directly within your XCUITest using the XCUIDevice API. This allows you to validate UI layout behavior in both portrait and landscape modes without manually rotating the simulator or device.
```swift
let app = XCUIApplication()
app.launch()

// Switch to landscape
XCUIDevice.shared.orientation = .landscapeLeft

// Switch back to portrait
XCUIDevice.shared.orientation = .portrait
```
After changing orientation, wait for the UI to stabilize before capturing a visual checkpoint. Testing both orientations is important because constraints, safe areas, and dynamic layouts often behave differently across modes.
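Putting rotation, settling, and capture together, an orientation checkpoint might look like the sketch below. It reuses the `capture(_:)` helper from Step 4; the `ProfileTitle` identifier is illustrative:

```swift
import XCTest

func testProfileInBothOrientations() {
    let app = XCUIApplication()
    app.launch()

    XCUIDevice.shared.orientation = .portrait
    // Rotation relayout is asynchronous: wait on a stable element, not a sleep.
    XCTAssertTrue(app.staticTexts["ProfileTitle"].waitForExistence(timeout: 5))
    capture("Profile - Portrait")

    XCUIDevice.shared.orientation = .landscapeLeft
    XCTAssertTrue(app.staticTexts["ProfileTitle"].waitForExistence(timeout: 5))
    capture("Profile - Landscape")
}
```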
Limitations of XCUITest Visual Testing
While XCUITest provides a solid foundation for UI automation, it has constraints when used for visual validation workflows.
- No Built-In Visual Diff Engine: XCUITest can capture screen states, but it does not provide native image comparison or baseline management. Teams must rely on external tools or custom logic to detect UI changes between builds.
- Device and OS Variability: Minor differences in iOS versions, fonts, or device resolutions can produce inconsistent visual outputs. Maintaining stable reference environments requires strict control over device configuration.
- Manual Baseline Maintenance: Reference images must be stored and updated manually unless integrated with a visual platform. This can increase maintenance effort as the UI evolves frequently.
- Limited Cross-Platform Support: XCUITest is restricted to iOS applications. Teams supporting Android must maintain a separate automation and validation framework.
- No Native Review Dashboard: Test failures appear in standard XCTest logs without a structured UI for reviewing visual differences. Collaborative approval workflows require third-party solutions.
- Scaling Requires External Infrastructure: Running tests across multiple device models and OS versions requires simulators or a device cloud. XCUITest alone does not provide large-scale execution infrastructure.
Running XCUITest Visual Tests With BrowserStack App Percy
XCUITest can validate UI flows locally, but scaling visual validation across multiple real iOS devices requires structured comparison, baseline control, and device coverage.
BrowserStack App Percy is an AI-powered visual testing platform with intelligent diffing that filters out dynamic content during regression runs at scale, reducing false positives and flaky results.
App Percy also provides an expansive real device cloud of 50,000+ real Android and iOS devices. Combined with parallel testing on real devices, you can catch minor pixel-level UI changes across many devices at once.
Step 1: Create an App Percy project and get your token
Create an App project in Percy and copy the PERCY_TOKEN. Percy uses this token to route your build results to the right org and project.
```shell
export PERCY_TOKEN="<YOUR_PERCY_TOKEN>"
```

Step 2: Add the Percy XCUITest SDK to your test target
Add the percy-xcui-swift package to your XCUITest target using Xcode’s Swift Package Manager.
Package URL (use in Xcode): https://github.com/percy/percy-xcui-swift
Step 3: Add Percy capture points in your XCUITest code
Import the SDK and capture visual states where you want coverage (key screens, modals, error states, dark mode, etc.).
```swift
import XCTest
import PercyXcui

final class MyAppUITests: XCTestCase {
    func testCheckoutFlow_visuals() throws {
        let app = XCUIApplication()
        app.launch()

        // Navigate to a stable UI state
        app.buttons["GoToCheckout"].tap()

        // Capture a visual state in Percy
        let percy = AppPercy()
        try percy.screenshot(name: "Checkout - Initial")

        // Example with options (if you need to override UI bar heights)
        var options = ScreenshotOptions()
        options.navigationBarHeight = 100
        options.statusBarHeight = 100
        try percy.screenshot(name: "Checkout - With Options", options: options)
    }
}
```
Step 4: Run locally (Mac) with Percy CLI
This is useful for quick iteration before running on real iOS devices.
- Add a host entry for Percy:
```shell
sudo sh -c 'echo "127.0.0.1 percy.cli" >> /etc/hosts'
```

- Start the Percy capture service, run your UI tests, then stop it:
```shell
export PERCY_TOKEN="<YOUR_PERCY_TOKEN>"
percy app:exec start
# run your XCUITest suite from Xcode or xcodebuild here
percy app:exec stop
```
Step 5: Run on BrowserStack real iOS devices
5a) Upload your .ipa (app under test)
```shell
curl -u "YOUR_USERNAME:YOUR_ACCESS_KEY" \
  -X POST "https://api-cloud.browserstack.com/app-automate/xcuitest/v2/app" \
  -F "file=@/path/to/app/App.ipa"
```
5b) Upload your XCUITest bundle (test suite .zip)
```shell
curl -u "YOUR_USERNAME:YOUR_ACCESS_KEY" \
  -X POST "https://api-cloud.browserstack.com/app-automate/xcuitest/v2/test-suite" \
  -F "file=@/path/to/tests/MyXCUITests.zip"
```
5c) Trigger a build with Percy enabled
```shell
curl -u "YOUR_USERNAME:YOUR_ACCESS_KEY" \
  -X POST "https://api-cloud.browserstack.com/app-automate/xcuitest/v2/build" \
  -H "Content-Type: application/json" \
  -d '{
    "devices": ["iPhone XR-15", "iPhone 13-15"],
    "app": "bs://<APP_ID>",
    "testSuite": "bs://<TEST_SUITE_ID>",
    "appPercy": {
      "env": { "PERCY_BRANCH": "main" },
      "PERCY_TOKEN": "<YOUR_PERCY_TOKEN>"
    }
  }'
```
After the run completes, open your Percy project to review diffs, approve intended UI updates, and track changes over time.
Conclusion
XCUITest gives iOS teams a reliable way to automate UI flows, but visual consistency requires more than functional validation. By adding structured visual checkpoints and running them across real devices, teams can detect subtle UI regressions before users do.
When combined with BrowserStack App Percy, XCUITest evolves into a scalable visual regression workflow with AI-powered comparison and centralized review. This approach helps iOS teams ship confidently across devices, screen sizes, and iOS versions.