When recording test actions, Reflow collects a large number of data points about the underlying element that was interacted with.
When replaying a test action, Reflow intelligently locates the appropriate element by combining these data points, using multiple strategies together to beat non-determinism.
With this process, Reflow is highly likely to select the same element across test runs on an arbitrary website. If issues do arise, we recommend adding high-specificity attributes (such as `data-test-id`) to the element or its parents to help Reflow consistently select the same one.
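For instance, a stable attribute can be attached to the element when it is rendered. The sketch below assumes a plain DOM setup; the `data-test-id` value `checkout-button` is a hypothetical example, not a value Reflow requires:

```typescript
// Sketch: tag an interactive element with a stable, high-specificity
// attribute so replay can always find it. The id value is hypothetical.
const button = document.createElement("button");
button.textContent = "Checkout";
button.setAttribute("data-test-id", "checkout-button");
document.body.appendChild(button);

// The element can then be located with an attribute selector that does
// not depend on node name, classes, or position in the page:
const target = document.querySelector('[data-test-id="checkout-button"]');
```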
Reflow scores specificity roughly as shown in the table below (a lower score means higher specificity). This scoring is applied recursively, from the element up to the root of its DOM or shadow DOM tree.
`:nth-child` is used to combine parent and child selectors when the child selector does not uniquely identify the element even in combination with its parent selector (see the sketch after the table).
| Selector strategy | Score |
| --- | --- |
| First 80 characters of element inner text | 30 |
| Element node name | 200 |
| Minimal combined class name that's unique in the page | 200 |
| Ignore element and evaluate all parent selectors instead | 1000 + SUM(parent selectors) |
| Wildcard fallback (any element) | 10000000 |
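To make the scoring concrete, here is a minimal sketch of how such a recursive score could be computed, assuming the weights from the table above. This is an illustrative approximation, not Reflow's actual implementation, and the helper names are hypothetical:

```typescript
// Illustrative sketch of recursive specificity scoring (not Reflow's
// actual implementation). Lower scores are more specific.
const WILDCARD_FALLBACK = 10_000_000;

function scoreElement(el: Element): number {
  const candidates: number[] = [];

  // First 80 characters of the element's inner text: 30
  if ((el.textContent ?? "").trim().length > 0) candidates.push(30);

  // Element node name: 200
  candidates.push(200);

  // Combined class name that's unique in the page: 200
  if (hasUniqueClassCombination(el)) candidates.push(200);

  // Ignore this element and evaluate the parent chain instead:
  // 1000 + SUM(parent selector scores), approximated here recursively.
  if (el.parentElement) candidates.push(1000 + scoreElement(el.parentElement));

  // Wildcard fallback matches anything but is heavily penalized.
  candidates.push(WILDCARD_FALLBACK);

  return Math.min(...candidates);
}

// Hypothetical helper: does the element's full class list match
// exactly one element in the page?
function hasUniqueClassCombination(el: Element): boolean {
  const classes = Array.from(el.classList);
  if (classes.length === 0) return false;
  return document.querySelectorAll("." + classes.join(".")).length === 1;
}

// If a parent/child combination is still not unique, :nth-child is
// appended to disambiguate (hypothetical helper; the caller ensures
// the element has a parent).
function withNthChild(parentSelector: string, childSelector: string, el: Element): string {
  const combined = `${parentSelector} > ${childSelector}`;
  if (document.querySelectorAll(combined).length === 1) return combined;
  const index = Array.from(el.parentElement!.children).indexOf(el) + 1;
  return `${combined}:nth-child(${index})`;
}
```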
When the web page changes in a way that alters the element, a new selector hierarchy is collected during the test run using the specificity rules above. Should the test succeed, subsequent runs use this most recent successful run to guide element selection.
Because of this, as long as changes are relatively small over time, tests auto-heal interactions that a manually written test would require additional work to fix.
For example, if a software release changed an anchor element (a link) into a button, but at least one of the following held (it stayed in the same element hierarchy, kept the same text, or kept the same `[data-test-id]`), the test would continue working and update itself appropriately without user intervention.
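As a concrete illustration, an attribute- or text-based lookup is indifferent to this node-name change. The `data-test-id` value and markup below are hypothetical:

```typescript
// Before a release: <a data-test-id="buy-now" href="/buy">Buy now</a>
// After the release: <button data-test-id="buy-now">Buy now</button>

// An attribute selector matches both versions, so a step recorded
// against the anchor still finds the button:
const byAttribute = document.querySelector('[data-test-id="buy-now"]');

// Matching on inner text likewise survives the node-name change
// (shown here as a manual scan over candidate elements):
const byText = Array.from(document.querySelectorAll("a, button"))
  .find((el) => el.textContent?.trim() === "Buy now");
```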
If the auto-healing process does not work, the test can be debugged in the edit screen by clicking "Play" and waiting until the error is reproduced. At that point, delete the original step, re-record it, and save the test (or press "Play" again to fully validate it).