Conditions & Expectations

The pairing of Conditions and Expectations forms the core evaluative logic of Qadenz. These classes are used to determine whether the state of the UI under test meets a given criterion.

The goal of the Condition/Expectation pairing is to provide a unified structure for making evaluations throughout a testing project, rather than mixing Selenium ExpectedConditions calls for Explicit Waits with various syntax patterns for test assertions. The resulting code is simple to read and quick to understand, and is easily maintained as the application under test evolves.

The evaluative logic behind Conditions and Expectations aims to answer the questions, “What is the current state of the UI?” and “What do I expect to see?”. Conveyed in terms familiar to testers, “What is the ACTUAL outcome?” and “What is the EXPECTED outcome?”.

A Condition describes a specific criterion to be evaluated on the UI. This could be the visibility of one or more elements, or the text shown in an element, for example. The Condition is used to establish the “actual outcome” portion of the evaluation. An Expectation, then, is required for each Condition. The Expectation describes the “expected outcome” portion of the evaluation.

How does it work?

Each Condition uses WebDriver commands to retrieve data from, or information about, elements on the UI. Each Expectation invokes a Hamcrest Matcher that is used for the evaluation, which is passed to the Condition. If the value retrieved by the Condition matches the value given on the Expectation, the Condition result will return TRUE.

Logging of evaluations is achieved by combining a description of the evaluation on the Condition with a description of the expected outcome on the Expectation. If an evaluation should fail and the Condition result return FALSE, additional information will be provided on the logs to illustrate the cause of the failure. This typically equates to capturing the “actual” value that did not meet the “expected” value.

Validations

Unit testing frameworks such as TestNG or JUnit include assertion functionality as a core component, and are relatively simple to use. Because these are open-ended frameworks, however, individual users may express very similar validations with a variety of different assertions. This leads to inconsistent coding patterns, and more difficult maintenance of test code.

Using Conditions and Expectations allows a team to ensure all contributors are following the same pattern for validations.

Conditions.textOfElement(greetingText, Expectations.isEqualTo("Hello World!"));

That said, Qadenz does employ a single TestNG assertion, the assertTrue() method, as a means of validating a Condition. The result() of a Condition is a simple representation of whether the state of the UI under test meets expectation. If the output of the Condition evaluation matches the Expectation, result() will return true.

By passing this result to the assertTrue() method, Qadenz is ensuring that a passing result depends on the Condition evaluation meeting the Expectation. If not, the validation will fail.
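Conceptually, the validation reduces to a single TestNG assertion on the Condition result. The sketch below is an illustration of that idea, not the literal Qadenz implementation:

```java
// Conceptual sketch: result() reflects whether the evaluation matched
// the Expectation, and assertTrue() converts a false result into a
// test failure. The greetingText locator is a hypothetical example.
Condition condition = Conditions.textOfElement(greetingText, Expectations.isEqualTo("Hello World!"));
org.testng.Assert.assertTrue(condition.result());
```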

Assertion Types

The concepts of Hard Assertions and Soft Assertions are not new in the test automation world. Qadenz implements both by way of the verify() and check() methods.

verify() represents a Hard Assertion. If the validation fails, the test will be marked as failed and execution will be stopped.

check() represents a Soft Assertion. If the validation fails, the test will be marked as failed, but execution will be allowed to continue until a call to Assertions.flush() is made, which will stop execution of the test if any failures have been encountered.

The verify() and check() methods are available on the Commands hierarchy and are callable from any descendant class of Commands. The mechanics of using these validations are the same; the only difference is that check() requires an additional call to Assertions.flush() in order to handle any failed Soft Assertions and stop execution.
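As a sketch, the two assertion types look identical at the call site. This assumes a WebCommander instance named commander; the saveButton and statusLabel locators are hypothetical:

```java
// Hard Assertion: a failure here stops the test immediately.
commander.verify(Conditions.visibilityOfElement(saveButton, Expectations.isTrue()));

// Soft Assertion: a failure here is recorded, and execution continues.
commander.check(Conditions.textOfElement(statusLabel, Expectations.isEqualTo("Saved")));

// Stops the test at this point if any prior check() has failed.
Assertions.flush();
```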

Grouped Conditions

Validations in Qadenz are further enhanced with the ability to evaluate multiple Conditions as a group. In scenarios where a single UI action can trigger multiple verification points in a test, a tester may have to express multiple assert statements to ensure necessary coverage. If, for example, the first assertion were to fail, the remaining assertions would remain unchecked until either the UI under test is fixed, or the test scenario is executed manually.

Using Qadenz, a tester is able to execute these same validations in one call to verify() or check(), and will receive results for each Condition evaluation regardless of individual outcomes. If, again, the first validation fails, Qadenz will perform handling tasks on the failure, then proceed to evaluate each of the remaining Conditions. In the case of a verify() with multiple Conditions where one or more have failed, halting of test execution is delayed until all Conditions have been evaluated, ensuring that the test step is completed in its entirety.

In the example below, a user has added an item to the shopping cart, and the next step will verify a snackbar notification is displayed with a confirmation message, the item quantity is shown on the shopping cart icon, and the ‘Checkout Now’ button is enabled.

commander.verify(
        Conditions.textOfElement(snackBarNotification, Expectations.isEqualTo("Items added successfully!")),
        Conditions.textOfElement(quantityInCartIndicator, Expectations.isEqualTo("1")),
        Conditions.enabledStateOfElement(checkOutNowButton, Expectations.isTrue()));

By grouping these verifications together, even if one (or more) Conditions fail, all will be evaluated and reported individually.

Managing Soft Assertions

The check() method works alongside the static Assertions.flush() method to delay execution stoppages in the event of failed validations. As calls to check() are made and executed through the course of a test, the Assertions class tracks whether any failures have been encountered. When the call to Assertions.flush() is made, this tracker is checked. If any failures are present, execution will be stopped. If no failures are found, execution continues.

Since the tracker is live for the entire duration of a test, there is no limit to how many calls to Assertions.flush() can be made throughout a test. It is possible, then, to create a series of “checkpoints” in longer tests wherever it is sensible to stop a test if failures have been found. This is especially convenient for smoke and end-to-end tests, where completing the test run is important for a full accounting of key validation points.
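A longer test with checkpoints might be sketched as follows (the locator names are hypothetical, and a WebCommander instance named commander is assumed):

```java
// Checkpoint 1: login validations. Failures are recorded, not thrown.
commander.check(Conditions.visibilityOfElement(welcomeBanner, Expectations.isTrue()));
commander.check(Conditions.textOfElement(userNameLabel, Expectations.isEqualTo("jdoe")));
// Stop here if anything above failed; otherwise continue.
Assertions.flush();

// Checkpoint 2: dashboard validations, only reached if checkpoint 1 passed.
commander.check(Conditions.visibilityOfElement(dashboardHeader, Expectations.isTrue()));
Assertions.flush();
```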

Please note, however, that at least one call to Assertions.flush() is required in tests where only check() validations are made. If no call is made, the test will be allowed to continue to completion, and individual steps will be reported as failed (if validations have indeed failed), but the test as a whole will be reported as passing. Since the Qadenz reporter is integrated with TestNG, the AssertionError thrown by the flush() method in the event of individual failures is required to mark the test itself as failed.

One additional design consideration must be made when mixing verify() and check() validations within the same test. If a check() validation is followed by a verify() validation before Assertions.flush() is called, and the verify() validation fails, the test will stop at the failed verify() validation.

Screenshots

Qadenz validations are built to capture screenshots whenever a Condition evaluation fails. If screenshots are desired for validation failures, no special action need be taken. Should screenshots not be needed for a validation, disabling is easy with the overloaded verify() and check() methods.

Adding a call to Screenshot.SKIP as the first argument in either verify() or check() will disable screenshots from being captured if the evaluations for any accompanying Conditions fail.

verify(Screenshot.SKIP, Conditions.visibilityOfElement(locator, Expectations.isTrue()));

A boolean could also be passed to achieve the same outcome. The Screenshot.SKIP value is intended as a means to keep the resulting code easily readable at a glance.

Waits

Selenium provides both Implicit and Explicit Wait types, and Java provides Thread.sleep(). While all are technically valid, each has its own advantages and disadvantages. Qadenz does not implement the WebDriver Implicit Wait. The Implicit Wait can serve as a basic catch-all wait approach in simple projects, but its flexibility is limited, and more importantly, it tends to not pair well with Explicit Waits. Using both in conjunction can cause some very unexpected side effects.

Qadenz opts for the Explicit Wait as the primary UI synchronization approach, and pairs this concept with Conditions and Expectations to define the criteria for the synchronization.

What about ExpectedConditions?

The ExpectedConditions class is well known among automation engineers and provides a wide variety of wait-conditions to handle timing and synchronization. In his 2017 Selenium State of the Union, Simon Stewart calls ExpectedConditions a “useful dumping ground for functionality” that “brutally violates this attempt to be concise”. While there is no denying the usefulness of a class such as ExpectedConditions, it could also be said that the method options aren’t always intuitive for choosing an ideal fit. In that same talk, Mr. Stewart uses the original intent and the evolution of ExpectedConditions as an example of why it’s important that developers not punish themselves too harshly for code written in the past.

Conditions and Expectations are implemented in Qadenz for waits with the intent of making the invocations of wait-conditions much more concise and exact, thus enhancing the readability (and maintainability) in test code, as well as improving the clarity and precision of logging output that is captured and presented on the final reports.

Invoking a Wait

Invoking a Wait is as simple as calling the pause() command, and passing an appropriate Condition/Expectation pairing.

If an application under test displays a confirmation banner when an item is added to a shopping cart that blocks access to the navigation menu, for example, the test would benefit from pausing execution until the banner confirmation disappears after a few seconds.

commander.pause(Conditions.visibilityOfElement(addItemConfirmation, Expectations.isFalse()));

Unlike the validation functionality that consumes the Condition and Expectation pairing, the pause() method does not provide the ability to pass multiple Conditions as a group for a wait. This reflects the view that waits should be used as sparingly as possible, to avoid unnecessarily lengthening execution times.

I want my MTV ExpectedConditions

While Qadenz does implement certain functionality in an opinionated manner, there is no reason to prevent access to the underlying tools for use in a customized solution within a consuming test project. By extending the WebCommander, it is entirely possible to create an instance of WebDriverWait and pass an ExpectedCondition to execute a wait. The recommended practice would be to use this approach only if a suitable Condition does not exist for the needed wait.
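A minimal sketch of this approach might look like the following. The method name, the 30-second timeout, and the means of obtaining the WebDriver instance are all assumptions for illustration, not part of the Qadenz API:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class CustomWebCommander extends WebCommander {

    // Waits for an element to become clickable using a raw ExpectedCondition,
    // for cases where no suitable Qadenz Condition exists.
    public void pauseUntilClickable(By locator) {
        WebDriver driver = getWebDriver(); // assumed accessor to the active WebDriver
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(30));
        wait.until(ExpectedConditions.elementToBeClickable(locator));
    }
}
```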