Configuration File

When you launch TestCafe from the command line, you can specify launch options, like so: testcafe all tests/sample-fixture.js -q.

As the list of options grows, it becomes harder to maintain. This is why launch settings are best stored in a dedicated configuration file.

Configuration conflicts

Command line options and TestCafe Runner options have precedence over configuration file settings. When TestCafe overrides a configuration file setting, it outputs a description of the conflict to the console.
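
For example, suppose the configuration file specifies Firefox (hypothetical values):

```json
{
    "browsers": "firefox"
}
```

If you then run testcafe chrome tests/, TestCafe launches Chrome instead and outputs a description of the conflict to the console.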

Setup

Location

On startup, TestCafe looks for a configuration file (.testcaferc.js or .testcaferc.json) in the current working directory. If you store the configuration file elsewhere, you can specify the path to a custom configuration file with the --config-file command line option.

Name

The default base name for the configuration file is .testcaferc. Set the --config-file command line option to use a configuration file with a custom name.

TestCafe supports two configuration file formats: .js and .json. Take a look at the Formats section for a comparison of these configuration file formats.

Configuration file priority

TestCafe can only use one configuration file at a time.

  • If a user does not set the --config-file option, TestCafe searches the current working directory for a file named .testcaferc.js.
  • If .testcaferc.js does not exist, TestCafe looks for a file named .testcaferc.json.
  • If neither file exists, TestCafe does not load any configuration files.

Formats

JSON

JSON configuration files store settings in key-value pairs. These files adhere to the JSON5 standard. They support JavaScript identifiers as object keys, single-quoted strings, comments, and other modern features.

.testcaferc.json:

{
    skipJsErrors: true,
    hostname: "localhost",
    // other settings
}

You can find a sample JSON configuration file in the GitHub repository.

JavaScript

JavaScript configuration files store settings in key-value pairs within a module.exports statement. TestCafe cannot access settings that you do not export.

JavaScript configuration files can contain definitions for global test hooks and global request hooks.

Configuration properties in JavaScript files can reference JavaScript methods, functions, and variables, which makes it easy to create dynamic configuration files.

Use CommonJS require syntax to access Node.js modules inside your configuration file.

let os = require("os"); // import the entire module
let myModule = require("/my-module.js"); // import a module by path
let { property } = require("module"); // import a particular property from the module

.testcaferc.js:

let os = require("os");

module.exports = {
    skipJsErrors: true,
    hostname: os.hostname(),
    // other settings
}

Settings

Configuration files can include the following settings:

browsers

Specifies one or several browsers in which a test should run.

You can use browser aliases to specify locally installed browsers.

{
    "browsers": "chrome"
}
{
    "browsers": ["ie", "firefox"]
}

Use the all alias to run tests in all the installed browsers.

Use the path: prefix to specify a path to the browser’s executable file. Enclose the path in backticks if it contains spaces.

{
    "browsers": "path:`C:\\Program Files\\Internet Explorer\\iexplore.exe`"
}

Alternatively, you can pass an object whose path property specifies the path to the browser’s executable file. In this case, you can also use an optional cmd property to specify command line parameters passed to the browser.

{
    "browsers": {
        "path": "/home/user/portable/firefox.app",
        "cmd": "--no-remote"
    }
}

To run tests in cloud browsers or other browsers accessed through a browser provider plugin, specify a browser alias that consists of the {browser-provider-name} prefix and the browser name (the latter can be omitted), for example, saucelabs:Chrome@52.0:Windows 8.1.

{
    "browsers": "saucelabs:Chrome@52.0:Windows 8.1"
}
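
The alias format can be modeled with a small parser (illustrative only; this is not TestCafe code, and the grammar here is a simplification):

```javascript
// Parse "{provider}:{browser}[@{version}][:{os}]" into its parts.
function parseBrowserAlias (alias) {
    const [provider, browserSpec = '', osName = ''] = alias.split(':');
    const [browser, version = ''] = browserSpec.split('@');

    return { provider, browser, version, os: osName };
}
```

For saucelabs:Chrome@52.0:Windows 8.1 this yields the provider saucelabs, browser Chrome, version 52.0, and OS Windows 8.1.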

To run tests in a browser on a remote device, specify remote as a browser alias.

If you want to connect multiple browsers, specify remote: and the number of browsers. For example, if you need to use four remote browsers, specify remote:4.

{
    "browsers": "remote:4"
}

You can add postfixes to browser aliases to run tests in headless mode, use Chrome device emulation, or use user profiles.

{
    "browsers": ["firefox:headless", "chrome:emulation:device=iphone X"]
}

Note

You cannot add postfixes when you use the path: prefix or pass a { path, cmd } object.

CLI: Browser List
API: runner.browsers, BrowserConnection

src

Specifies files or directories from which to run tests.

TestCafe can run tests from the following file types:

  • JavaScript, TypeScript, and CoffeeScript files that use the TestCafe API.
  • TestCafe Studio tests (.testcafe files).
  • Legacy TestCafe v2015.1 tests.

{
    "src": "/home/user/tests/fixture.js"
}
{
    "src": ["/home/user/auth-tests/fixture.testcafe", "/home/user/mobile-tests/"]
}

You can use glob patterns to specify a set of files.

{
    "src": ["/home/user/tests/**/*.js", "!/home/user/tests/foo.js"]
}

CLI: File Path/Glob Pattern
API: runner.src

baseUrl

The baseUrl option defines the starting URL for all tests and fixtures in your test suite.

{
    "baseUrl": "https://devexpress.github.io/testcafe"
}

You can override this option with the fixture.page and test.page methods.
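
As a rough mental model (an assumption for illustration, not a statement about TestCafe internals), relative page paths combine with the base URL much like WHATWG URL resolution:

```javascript
// Illustrative only: model how a relative page path combines with a base URL.
const baseUrl = 'https://devexpress.github.io/testcafe/';

// A fixture.page value such as './example' would resolve to:
const resolved = new URL('./example', baseUrl).href;

console.log(resolved); // https://devexpress.github.io/testcafe/example
```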

CLI: --base-url
API: run.baseUrl

reporter

Specifies the name of a built-in or custom reporter that generates test reports.

{
    "reporter": "list"
}

This configuration outputs the test report to stdout. To save a report to a file, pass an object whose name property specifies the reporter name and output property specifies the path to the file.

{
    "reporter": {
        "name": "xunit",
        "output": "reports/report.xml"
    }
}

You can use multiple reporters, but note that only one reporter can write to stdout. All other reporters must output to files.

{
    "reporter": [
        {
            "name": "spec"
        },
        {
            "name": "json",
            "output": "reports/report.json"
        }
    ]
}
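
The stdout constraint can be expressed as a small validation helper (a hypothetical check, not part of TestCafe's API):

```javascript
// At most one reporter may write to stdout, i.e. lack an "output" file.
function reportersAreValid (reporter) {
    const list = Array.isArray(reporter) ? reporter : [reporter];
    const stdoutCount = list.filter(r => typeof r === 'string' || !r.output).length;

    return stdoutCount <= 1;
}
```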

CLI: -r, --reporter
API: runner.reporter

screenshots

Allows you to specify the screenshot options.

screenshots.path

Specifies the base directory in which to save screenshots.

{
    "screenshots": {
        "path": "/home/user/tests/screenshots/"
    }
}

See Screenshots for details.

CLI: --screenshots path
API: runner.screenshots

screenshots.takeOnFails

Specifies whether to take a screenshot when a test fails.

{
    "screenshots": {
        "takeOnFails": true
    }
}

TestCafe saves screenshots to the directory specified in the screenshots.path property.

CLI: --screenshots takeOnFails
API: runner.screenshots

screenshots.pathPattern

Specifies a custom pattern to compose screenshot files’ relative path and name.

{
    "screenshots": {
        "pathPattern": "${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png"
    }
}

See Path Pattern Placeholders for information about available placeholders.

CLI: --screenshots pathPattern
API: runner.screenshots

screenshots.fullPage

Specifies whether to capture the full page, including invisible elements.

{
    "screenshots": {
        "fullPage": true
    }
}

CLI: --screenshots fullPage
API: runner.screenshots

screenshots.thumbnails

Specifies whether to make thumbnails for captured screenshots.

{
    "screenshots": {
        "thumbnails": false
    }
}

TestCafe saves thumbnails to the directory specified in the screenshots.path property.

CLI: --screenshots thumbnails
API: runner.screenshots

disableScreenshots

Prevents TestCafe from taking screenshots.

{
    "disableScreenshots": true
}

When this property is specified, TestCafe does not take screenshots when a test fails and ignores the screenshot action.

CLI: --disable-screenshots
API: runner.run({ disableScreenshots })

screenshotPath

Deprecated. Enables screenshots and specifies the base directory where they are saved.

{
    "screenshotPath": "/home/user/tests/screenshots/"
}

In v1.5.0 and later, screenshots are enabled by default and saved to ./screenshots.

To save them to a different location, specify the screenshots.path property:

{
    "screenshots": {
        "path": "/home/user/tests/screenshots/"
    }
}

Use the disableScreenshots property to prevent TestCafe from taking screenshots:

{
    "disableScreenshots": true
}

takeScreenshotsOnFails

Deprecated. Specifies that TestCafe should take a screenshot whenever a test fails.

{
    "takeScreenshotsOnFails": true
}

In v1.5.0 and later, use the screenshots.takeOnFails property:

{
    "screenshots": {
        "takeOnFails": true
    }
}

screenshotPathPattern

Deprecated. Specifies a custom pattern to compose screenshot files’ relative path and name.

{
    "screenshotPathPattern": "${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png"
}

In v1.5.0 and later, use the screenshots.pathPattern property:

{
    "screenshots": {
        "pathPattern": "${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png"
    }
}

videoPath

Enables TestCafe to record videos of test runs and specifies the base directory to save these videos.

{
    "videoPath": "reports/screen-captures"
}

See Record Videos for details.

CLI: --video
API: runner.video

videoOptions

Specifies options that define how TestCafe records videos of test runs.

{
    "videoOptions": {
        "singleFile": true,
        "failedOnly": true,
        "pathPattern": "${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.mp4"
    }
}

See Basic Video Options for available options.

Note

Use the videoPath option to enable video recording.

CLI: --video-options
API: runner.video

videoEncodingOptions

Specifies video encoding options.

{
    "videoEncodingOptions": {
        "r": 20,
        "aspect": "4:3"
    }
}

You can pass all the options supported by the FFmpeg library. Refer to the FFmpeg documentation for information about all available options.

Note

Use the videoPath option to enable video recording.

CLI: --video-encoding-options
API: runner.video

quarantineMode

Enable quarantine mode to eliminate false negatives and detect unstable tests. TestCafe quarantines tests that fail and repeats them until they yield conclusive results.

{
    "quarantineMode": true
}

CLI: -q, --quarantine-mode
API: runner.run({ quarantineMode })

quarantineMode.successThreshold

The number of successful attempts necessary to confirm a test’s success. The option value should be greater than 0. The default value is 3.

{
    "quarantineMode": {
        "successThreshold": 2
    }
}

CLI: -q successThreshold=N, --quarantine-mode successThreshold=N
API: runner.run({ quarantineMode: { successThreshold: N } })

quarantineMode.attemptLimit

The maximum number of test execution attempts. The option value should exceed the value of the successThreshold. The default value is 5.

{
    "quarantineMode": {
        "attemptLimit": 3
    }
}

CLI: -q attemptLimit=N, --quarantine-mode attemptLimit=N
API: runner.run({ quarantineMode: { attemptLimit: N } })
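
A toy model of how the two thresholds might interact (illustrative only; TestCafe's real quarantine logic may differ):

```javascript
// Re-run a test until it collects `successThreshold` passes or the
// attempt count reaches `attemptLimit`.
function quarantine (runAttempt, { successThreshold = 3, attemptLimit = 5 } = {}) {
    let passes = 0;

    for (let attempt = 1; attempt <= attemptLimit; attempt++) {
        if (runAttempt(attempt))
            passes++;

        if (passes >= successThreshold)
            return { passed: true, attempts: attempt };
    }

    return { passed: false, attempts: attemptLimit };
}
```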

debugMode

Runs tests in debug mode.

{
    "debugMode": true
}

See the --debug-mode command line parameter for details.

CLI: -d, --debug-mode
API: runner.run({ debugMode })

debugOnFail

Specifies whether to enter debug mode when a test fails.

{
    "debugOnFail": true
}

If this option is enabled, TestCafe pauses the test when it fails. This allows you to view the tested page and determine the cause of the failure.

Click the Finish button in the footer to end test execution.

CLI: --debug-on-fail
API: runner.run({ debugOnFail })

experimentalDebug

Important

This debug mode is experimental.

Specify this option to evaluate selectors and client functions in the Watch panel of a Node.js debugger.

{
    "experimentalDebug": true
}

In the debugger, call selectors and client functions synchronously. For example, suppose your test contains the following code:

await Selector('body').innerText
...
await ClientFunction(() => location.href)

To evaluate these expressions in your debugger, specify them without the await keyword:

Selector('body')().innerText
...
ClientFunction(() => location.href)()

Limitations

  • Use Node.js v14.0.0 or later.
  • In TypeScript tests, TestCafe fails to evaluate selectors and client functions from the Page Model if a test does not use them in code. This case occurs because TypeScript removes unused module imports. See also: FAQ: Why are imports being elided in my emit?.
  • TestCafe supports experimental debug mode for .js and .ts test files and does not support CoffeeScript tests.

CLI: --experimental-debug
API: runner.run({ experimentalDebug })

skipJsErrors

Main article: Skip JavaScript Errors

TestCafe tests fail when a page yields a JavaScript error. Use the skipJsErrors option to ignore JavaScript errors.

Important

Errors are signs of malfunction. Do not ignore errors that you can fix.
If a page outputs unwarranted error messages, modify your application to prevent this behavior.
Use the skipJsErrors option to silence errors that you cannot act upon.

Skip all JavaScript errors

If you don’t specify additional options, TestCafe ignores all JavaScript errors:

{
    "skipJsErrors": true
}

Skip JavaScript errors by message, URL, and stack

Specify options to filter errors by string or regular expression.

Warning

Enclose regular expressions in forward slashes to avoid strict matches for special characters.

  • If you specify the message option, TestCafe ignores JavaScript errors with messages that match the regular expression:
    {
        "skipJsErrors": {
            "message": /.*User ID.*/ig
        }
    }
    
  • If you specify the pageUrl option, TestCafe ignores JavaScript errors on pages with URLs that match the regular expression:
    {
        "skipJsErrors": {
            "pageUrl": /.*.*html/
        }
    }
    
  • If you specify the stack option, TestCafe ignores JavaScript errors with call stacks that match the regular expression:
    {
        "skipJsErrors": {
            "stack": /.*jquery.*/ig
        }
    }
    
  • Specify several arguments to skip errors that fit multiple criteria at once — for example, errors with a specific message and a specific call stack.
    {
        "skipJsErrors": {
            "stack": "/.*jquery.*/",
            "message": "/.*User ID.*/ig"
        }
    }
    

Use custom logic to skip JavaScript errors

Use a JavaScript configuration file to define a callback function with custom logic:

const callbackFunction = {
    fn: ({ message, stack }) => message.includes('User') || stack.includes('jquery')
};

module.exports = {
    skipJsErrors: callbackFunction,
};
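
Because the callback is plain JavaScript, you can exercise such a predicate directly in Node.js before wiring it into the configuration (the error objects below are made up for illustration; note that the callback must destructure every property it reads):

```javascript
// A predicate that skips errors by message or call stack.
const fn = ({ message, stack }) => message.includes('User') || stack.includes('jquery');

// Hypothetical error data:
console.log(fn({ message: 'Invalid User ID', stack: 'at app.js:10' }));  // true  — skipped
console.log(fn({ message: 'Oops', stack: 'at jquery.min.js:3' }));       // true  — skipped
console.log(fn({ message: 'Oops', stack: 'at app.js:10' }));             // false — reported
```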

CLI: -e, --skip-js-errors
API: runner.run({ skipJsErrors })

skipUncaughtErrors

Ignores uncaught errors and unhandled promise rejections in test code.

{
    "skipUncaughtErrors": true
}

When an uncaught error or unhandled promise rejection occurs on the server during test execution, TestCafe stops the test and posts an error message to a report. To ignore these errors, use the skipUncaughtErrors property.

CLI: -u, --skip-uncaught-errors
API: runner.run({ skipUncaughtErrors })

filter

Allows you to specify which tests or fixtures to run. Use the following properties individually or in combination.

filter.test

Runs a test with the specified name.

{
    "filter": {
        "test": "Click a label"
    }
}

CLI: -t, --test
API: runner.filter

filter.testGrep

Runs tests whose names match the specified grep pattern.

{
    "filter": {
        "testGrep": "Click.*"
    }
}

CLI: -T, --test-grep
API: runner.filter

filter.fixture

Runs a test with the specified fixture name.

{
    "filter": {
        "fixture": "Sample fixture"
    }
}

CLI: -f, --fixture
API: runner.filter

filter.fixtureGrep

Runs tests whose fixture names match the specified grep pattern.

{
    "filter": {
        "fixtureGrep": "Page.*"
    }
}

CLI: -F, --fixture-grep
API: runner.filter

filter.testMeta

Runs tests with the specified metadata.

{
    "filter": {
        "testMeta": {
            "device": "mobile",
            "env": "production"
        }
    }
}

The configuration example above instructs TestCafe to only run tests that meet the following requirements:

  • The value of the device metadata property is mobile.
  • The value of the env metadata property is production.
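
Metadata filtering is effectively a subset check: a test runs only if every key/value pair in the filter appears in its metadata. An illustrative sketch (not TestCafe internals):

```javascript
// True when every key/value pair in `filter` is present in `meta`.
function matchesMeta (meta, filter) {
    return Object.entries(filter).every(([key, value]) => meta[key] === value);
}
```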

CLI: --test-meta
API: runner.filter

filter.fixtureMeta

Runs fixtures with the specified metadata.

{
    "filter": {
        "fixtureMeta": {
            "device": "mobile",
            "env": "production"
        }
    }
}

The configuration example above instructs TestCafe to only run fixtures that meet the following requirements:

  • The value of the device metadata property is mobile.
  • The value of the env metadata property is production.

CLI: --fixture-meta
API: runner.filter

appCommand

Executes a shell command and runs tests.

{
    "appCommand": "node server.js"
}

Use the appCommand property to launch the application you need to test. TestCafe terminates this application after testing is completed.

The appInitDelay property specifies the amount of time allowed for this command to initialize the tested application.

Note

TestCafe adds node_modules/.bin to PATH, so that you can use the binaries from the locally installed dependencies without prefixes.

CLI: -a, --app
API: runner.startApp

appInitDelay

Specifies the time (in milliseconds) allowed for the application to initialize when you launch it with the --app option.

TestCafe waits for the specified amount of time before it starts tests.

{
    "appCommand": "node server.js",
    "appInitDelay": 3000
}

Default value: 1000

CLI: --app-init-delay
API: runner.startApp

concurrency

Specifies the number of browser instances that should run tests concurrently.

{
    "concurrency": 3
}

TestCafe opens several instances of the same browser and creates a pool of browser instances. Tests are run concurrently against this pool; that is, each test is run in the first free instance.
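
The pool behavior can be modeled with a few lines of async JavaScript (an illustrative sketch, not TestCafe's scheduler):

```javascript
// Run `tests` (async functions) with at most `concurrency` in flight.
async function runConcurrently (tests, concurrency) {
    const queue = [...tests];
    const results = [];
    let active = 0;
    let peak = 0;

    async function instance () {
        while (queue.length) {
            const test = queue.shift(); // each test runs in the first free instance

            active++;
            peak = Math.max(peak, active);
            results.push(await test());
            active--;
        }
    }

    await Promise.all(Array.from({ length: concurrency }, instance));
    return { results, peak };
}
```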

See Concurrent Test Execution for more information about concurrent test execution.

CLI: -c, --concurrency
API: runner.concurrency

selectorTimeout

Specifies the time (in milliseconds) within which selectors attempt to return a node. See Selector Timeout for details.

{
    "selectorTimeout": 3000
}

Default value: 10000

CLI: --selector-timeout
API: runner.run({ selectorTimeout })

assertionTimeout

Specifies the time (in milliseconds) within which TestCafe attempts to successfully execute an assertion if you pass a selector property or a client function as an actual value.

See Smart Assertion Query Mechanism.

{
    "assertionTimeout": 1000
}

Default value: 3000

CLI: --assertion-timeout
API: runner.run({ assertionTimeout })

pageLoadTimeout

Specifies the time (in milliseconds) passed after the DOMContentLoaded event within which TestCafe waits for the window.load event to fire.

After the timeout passes or the window.load event is raised (whichever happens first), TestCafe starts the test.

{
    "pageLoadTimeout": 1000
}

Default value: 3000

See the command line --page-load-timeout parameter for details.

CLI: --page-load-timeout
API: runner.run({ pageLoadTimeout })

ajaxRequestTimeout

Specifies wait time (in milliseconds) for fetch/XHR requests. If TestCafe receives no response within the specified period, it throws an error.

{
    "ajaxRequestTimeout": 40000
}

Default value: 120000

CLI: --ajax-request-timeout

pageRequestTimeout

Specifies time (in milliseconds) to wait for HTML pages. If TestCafe does not receive a page within the specified period, it throws an error.

{
    "pageRequestTimeout": 8000
}

Default value: 25000

CLI: --page-request-timeout

browserInitTimeout

Time (in milliseconds) for browsers to connect to TestCafe and report that they are ready to test. If one or more browsers fail to connect within the specified period, TestCafe throws an error.

{
    "browserInitTimeout": 180000
}

In this example, the timeout for local and remote browsers is three minutes. All browsers must connect within this period; otherwise, TestCafe throws an error.

Default values:

CLI: --browser-init-timeout
API: runner.run({ browserInitTimeout })

testExecutionTimeout

Maximum test execution time (in milliseconds). When the total execution time of a test exceeds this value, TestCafe terminates the test. This behavior occurs even if the browser is responsive.

{
    "testExecutionTimeout": 180000
}

CLI: --test-execution-timeout

runExecutionTimeout

Maximum test run execution time (in milliseconds). When the total execution time of a run exceeds this value, TestCafe terminates the test run. This behavior occurs even if one of the tests or hooks is active.

{
    "runExecutionTimeout": 180000
}

CLI: --run-execution-timeout

speed

Specifies the test execution speed.

If this option is not specified, TestCafe runs tests at the maximum speed. You can use this option to slow a test down. Values range from 0.01 (the slowest) to 1 (the fastest).

{
    "speed": 0.1
}

Default value: 1

If you specify the speed for an individual action, the action’s speed setting overrides the test speed.

CLI: --speed
API: runner.run({ speed })

clientScripts

Injects scripts into pages visited during tests. Use this property to introduce client-side mock functions or helper scripts.

{
    "clientScripts": [
        {
            "module": "lodash"
        },
        {
            "path": "scripts/react-helpers.js",
            "page": "https://myapp.com/page/"
        }
    ]
}

Inject a JavaScript File

Specify the JavaScript file path to inject the content of this file into the tested pages. You can pass a string or object with the path property.

{
    "clientScripts": "<filePath>" | { "path": "<filePath>" }
}
{
    "clientScripts": [ "<filePath>" | { "path": "<filePath>" } ]
}
filePath (String): The path to the JavaScript file whose content should be injected.

Note

You cannot combine the path, module and content properties in a single object. To inject multiple items, pass several arguments or an array.

TestCafe resolves relative paths against the current working directory.

Example

{
    "clientScripts": "assets/jquery.js",
    // or
    "clientScripts": { "path": "assets/jquery.js" }
}

Inject a Module

Specify the Node.js module’s name to inject its content into the tested pages. Use an object with the module property.

{
    "clientScripts": { "module": "<moduleName>" }
}
{
    "clientScripts": [ { "module": "<moduleName>" } ]
}
moduleName (String): The module name.

Note

You cannot combine the module, path and content properties in a single object. To inject multiple items, pass several arguments or an array.

TestCafe uses Node.js mechanisms to search for the module’s entry point and injects its content into the tested page.

The browser must be able to execute the injected module. For example, modules that implement the UMD API can run in most modern browsers.

Note

If the injected module has dependencies, ensure that the dependencies can be loaded as global variables and these variables are initialized in the page’s code.

Example

{
    "clientScripts": {
        "module": "lodash"
    }
}

Inject Script Code

You can pass an object with the content property to provide the injected script as a string.

{
    "clientScripts": { "content": "<code>" }
}
{
    "clientScripts": [ { "content": "<code>" } ]
}
code (String): The JavaScript code that should be injected.

Note

You cannot combine the content, path and module properties in a single object. To inject multiple items, pass several arguments or an array.

Example

{
    "clientScripts": {
        "content": "Date.prototype.getTime = () => 42;"
    }
}

Provide Scripts for Specific Pages

You can also specify pages into which a script should be injected. This allows you to mock the browser API on specified pages and use the default behavior everywhere else.

To specify target pages for a script, add the page property to the object you pass to clientScripts.

{
    "clientScripts": {
        "page": "<url>",
        "path": "<filePath>" | "module": "<moduleName>" | "content": "<code>"
    }
}
{
    "clientScripts": [
        {
            "page": "<url>",
            "path": "<filePath>" | "module": "<moduleName>" | "content": "<code>"
        }
    ]
}
url (String): The URL of the page to which the script is added.

Note

If the target page redirects to a different URL, ensure that the page property matches the destination URL. Otherwise, scripts are not injected.

Example

{
    "clientScripts": {
        "page": "https://myapp.com/page/",
        "content": "Geolocation.prototype.getCurrentPosition = () => new Position(0, 0);"
    }
}

Note

Note that regular expressions are not supported in the configuration file. To define target pages with a regular expression, use the following API methods:

The fixture.clientScripts and test.clientScripts methods allow you to inject scripts into pages visited during an individual fixture or test.

For more information, see Inject Scripts into Tested Pages.

CLI: --cs, --client-scripts
API: runner.clientScripts

port1, port2

Specifies custom port numbers TestCafe uses to perform testing. The number range is [0-65535].

{
    "port1": 12345,
    "port2": 54321
}

TestCafe automatically selects ports if ports are not specified.

CLI: --ports
API: createTestCafe

hostname

Specifies your computer’s hostname. TestCafe uses this hostname when you run tests in remote browsers.

{
    "hostname": "host.mycorp.com"
}

If the hostname is not specified, TestCafe uses the operating system’s hostname or the current machine’s network IP address.

CLI: --hostname
API: createTestCafe

proxy

Specifies the proxy server used in your local network to access the Internet.

{
    "proxy": "proxy.corp.mycompany.com"
}
{
    "proxy": "172.0.10.10:8080"
}

You can also specify authentication credentials with the proxy host.

{
    "proxy": "username:password@proxy.mycorp.com"
}

CLI: --proxy
API: runner.useProxy

proxyBypass

Requires that TestCafe bypasses the proxy server to access specified resources.

{
    "proxyBypass": "*.mycompany.com"
}
{
    "proxyBypass": ["localhost:8080", "internal-resource.corp.mycompany.com"]
}

See the --proxy-bypass command line parameter for details.

CLI: --proxy-bypass
API: runner.useProxy

ssl

Provides options that allow you to establish an HTTPS connection between the client browser and the TestCafe server.

{
    "ssl": {
        "pfx": "path/to/file.pfx",
        "rejectUnauthorized": true
    }
}

See the --ssl command line parameter for details.

CLI: --ssl
API: createTestCafe

developmentMode

Enables mechanisms to log and diagnose errors. Enable this option if you plan to contact TestCafe Support to report an issue.

{
    "developmentMode": true
}

CLI: --dev
API: createTestCafe

qrCode

If you launch TestCafe from the console, this option outputs a QR code that contains URLs used to connect the remote browsers.

{
    "qrCode": true
}

CLI: --qr-code

stopOnFirstFail

Stops a test run if a test fails.

{
    "stopOnFirstFail": true
}

CLI: --sf, --stop-on-first-fail
API: runner.run({ stopOnFirstFail })

tsConfigPath

Deprecated as of TestCafe v1.10.0 in favor of the compilerOptions setting.

CLI: --ts-config-path
API: runner.tsConfigPath

compilerOptions

Specifies test compilation settings. The current version of TestCafe can only configure the TypeScript compiler.

{
    "compilerOptions": {
        "typescript": {
            "customCompilerModulePath": "path to custom Typescript compiler module",
            "options": { "experimentalDecorators": "true", "newLine": "crlf" }
        }
    }
}

Populate the typescript.options object with TypeScript compiler options.

Set the typescript.configPath parameter to load TypeScript compilation settings from a dedicated tsconfig.json file.

{
    "compilerOptions": {
         "typescript": { "configPath": "path-to-custom-ts-config.json"}
    }
}

Set the typescript.customCompilerModulePath parameter to load an external TypeScript compiler.

{
   "compilerOptions": {
       "typescript":   { "customCompilerModulePath": "path to custom Typescript compiler module" }
    }
}

Note

TestCafe resolves user-specified relative paths against the TestCafe installation folder.

CLI: --compiler-options
API: runner.compilerOptions

cache

If this option is enabled, the TestCafe proxy caches assets (such as stylesheets and scripts) of the processed web pages. The next time the proxy accesses the page, it pulls assets from its cache instead of requesting them from the server.

testcafe chrome my-tests --cache

TestCafe emulates the browser’s native caching behavior. For example, in Chrome, TestCafe only caches resources that Chrome itself would cache if run without TestCafe.

TestCafe caches scripts, stylesheets, fonts, and other assets up to 5 MB in size. TestCafe does not cache HTML because that could break user roles.

If the tested application loads many heavy assets, enable server-side caching to decrease test run time.

CLI: --cache
API: createTestCafe

disablePageCaching

Prevents the browser from caching page content.

{
    "disablePageCaching": true
}

Users may inadvertently access cached pages that contain outdated automation scripts, for example, when they activate a Role. This can lead to TestCafe errors. If you encounter this issue, enable the disablePageCaching option to prevent the browser from caching automation scripts.

You can disable page caching for an individual fixture or test.

CLI: --disable-page-caching
API: runner.run({ disablePageCaching })

disableMultipleWindows

Disables support for multi-window testing.

{
    "disableMultipleWindows": true
}

The disableMultipleWindows option disables support for multi-window testing in Chrome and Firefox. Use this setting if you encounter compatibility issues with your existing tests.

CLI: --disable-multiple-windows
API: runner.run({ disableMultipleWindows })

retryTestPages

If this option is enabled, TestCafe retries failed network requests for web pages visited during tests. The retry functionality is limited to ten tries.

{
    "retryTestPages": true
}

This feature uses Service Workers that require a secure connection. To run TestCafe over a secure connection, set up HTTPS or use the --hostname localhost option.

CLI: --retry-test-pages
API: createTestCafe

color

Enables colors in the command line.

{
    "color": true
}

CLI: --color

noColor

Disables colors in the command line.

{
    "noColor": true
}

CLI: --no-color

userVariables

Specifies user variables and their values.

Follow the steps below to add user variables and use them in a test:

  1. Add pairs of variable names and values to the userVariables option within the .testcaferc.json file.

  2. Import userVariables from the testcafe module in your test.

.testcaferc.json:

{
  "userVariables": {
    "url": "http://devexpress.github.io/testcafe/example",
  }
}

test.js:

const { userVariables } = require('testcafe');

fixture `Test user variables`
    .page(userVariables.url);

test('Type text', async t => {
    await t
        .typeText('#developer-name', 'John Smith')
        .click('#submit-button');
});

disableHttp2

Disables support for HTTP/2 requests.

{
    "disableHttp2": true
}

CLI: --disable-http2

hooks

If you use a JavaScript configuration file, you can define global test hooks and request hooks.

Test Hooks

Main article: Test Hooks.

There are three kinds of global test hooks.

Test run hooks

Test run hooks run when you launch TestCafe and just before the TestCafe process terminates.

The following example declares a before test run hook and an after test run hook.

const utils = require('./my-utils.js');
const { admin } = require('roles');

module.exports = {
    hooks: {
        testRun: {
            before: async ctx => {
                ctx.serverId = 123;

                utils.launchServer(ctx.serverId);
            },
            after:  async ctx => {
                utils.terminateServer(ctx.serverId);
            }
        },
    }
};

Global fixture hooks

Global fixture hooks run before or after each of the fixtures in your test suite.

The following example declares a global before fixture run hook and a global after fixture hook:

const utils = require('./my-utils.js');
const { admin } = require('roles');

module.exports = {
    hooks: {
        fixture: {
            before: async ctx => {
                ctx.dbName = 'users';

                utils.populateDb(ctx.dbName);
            },
            after:  async ctx => {
                utils.dropDb(ctx.dbName);
            },
        },
    }
};

Global test hooks

Global test hooks run before or after each of the tests in your test suite.

The following example declares a global before test hook and a global after test hook:

const utils = require('./my-utils.js');
const { admin } = require('roles');

module.exports = {
    hooks: {
        test:    {
            before: async t => {
                t.ctx = 'test data';

                await t.useRole(admin);
            },
            after:  async t => {
                await t.click('#delete-data');
                console.log(t.ctx); // > test data
            }
        }
    }
};

Request Hooks

You can define global RequestHooks, Request Loggers, and Mock Requests (see Intercept HTTP Requests). TestCafe attaches global request hooks to your entire test suite.

The next example declares a global request hook. This hook attaches the same RequestMock to all tests.

const { RequestMock } = require('testcafe');

const mock = RequestMock()
    .onRequestTo('https://api.mycorp.com/users/id/135865')
    .respond({
        name: 'John Hearts',
        position: 'CTO',
    }, 200, { 'access-control-allow-origin': '*' })
    .onRequestTo(/internal.mycorp.com/)
    .respond(null, 404);

module.exports = {
    hooks: {
        request: mock,
    },
};

See Intercept HTTP Requests for additional examples of request hooks.