
How We Improve the Reliability of Cypress Tests

Introduction

We have a project in the health and research area where the main functionality of the client is to submit forms with several fields (more than 100) in several different ways — selectors, multi-selectors, open fields, and checkboxes. As the form is extensive, full of shared components, good automation is necessary to ensure the security and quality of our development and deliveries.

This project has plenty of unit tests, but the e2e tests (written with Cypress) were flawed, with many false positives and negatives. As a result, the automation was not adequate for our client's needs, and that's where we came in.

We needed to revisit automation (unit and e2e), understand the points that could be improved, and try to bring that confidence to the project and the team.

It’s important to highlight at this point that the need to improve our client’s automation does not mean the project or its tests were wrong. We must remember that the existing automation setup was put in place under the conditions, knowledge, and constraints of its time, and for reasons that made sense then.

In this article, we'll discuss some best practices for evolving a test project in Cypress (let's put aside the discussion of unit tests for now).

How did we start?

Our main goal here is to make Cypress tests reliable enough to run in any environment (Local, Dev, QA, or Prod). To achieve this, we conducted a thorough analysis of the project, examining the code, pipeline testing methodology, and recurring issues. We found the following points:

  • Certain tests were producing false negatives both locally and in the pipeline, often due to delayed rendering of information in React fields.
  • The login was done in each of the test suites and failed at times, generating errors in the tests. Furthermore, we had to use Puppeteer to perform it, as the version of Cypress we were using did not allow a test to visit more than one domain.
  • The code lacked an established design pattern, such as Page Objects, which made it challenging to maintain elements and perform actions during testing.
  • We had several duplicate tests with the same flow of actions and validations spread across different files. On paper we had a stack of 60 tests, but in practice there were fewer unique ones.
  • The execution time of the test suite as a whole was around 35 minutes, due to the way we logged in and how we used the before and beforeEach hooks.

Below is an example of one of the *.spec.ts files we had (we had to change some variable and selector names for compliance reasons).

TypeScript
/* eslint-disable jest/valid-expect-in-promise */
describe('something page', () => {
  beforeEach(() => {
    cy.loginAsInternalUser()
    cy.intercept({
      method: 'GET',
      url: '/api/something/*',
    }).as('something')
    cy.selectSponsor('/something', '216271')
    cy.wait('@something')
  })
  it('should successfully land on the something', () => {
    cy.getBySel('welcomeHeader').should('contain', 'Something')
  })
  it('should have the same customer for sub-tests', () => {
    cy.getBySel('welcomeHeader').should('contain', 'Something')
  })
  it('should show list after selecting a customer', () => {
    cy.getBySel('accordion-item').should('have.length', 7)
  })
  it('should create a short list related to a filter1', () => {
    cy.getBySel('searchable-select-filter1').find('button').click()
    cy.get('div[role="option"]').first().click()
    cy.getBySel('row').should('have.length', 5)
  })
  it('should have the first register expanded by default', () => {
    cy.getBySel('accordion-button')
      .first()
      .should('have.attr', 'aria-expanded', 'true')
  })
  it('should show filters open as default', () => {
    cy.getBySel('searchable-select-filter1').should('exist')
  })
  it('filters should persist from overview to view', () => {
    cy.getBySel('searchable-select-filter1').find('button').click()
    cy.get('div[role="option"]').first().click()
    cy.getBySel('filter-tag-wrapper').then(($tag) => {
      const tag = $tag.first().text()
      cy.getBySel('viewButton').first().click()
      cy.getBySel('filter-tag-wrapper').contains(tag)
    })
  })
  it('filters should reset when the reset button is clicked', () => {
    cy.getBySel('searchable-select-filter1').find('button').click()
    cy.get('div[role="option"]').first().click()
    cy.getBySel('filter-tag-wrapper').should('exist')
    cy.getBySel('reset-button').click()
    cy.getBySel('filter-tag-wrapper').should('not.exist')
  })
  it('unreported filters should retain all statuses regardless of context', () => {
    cy.getBySel('accordion-item')
      .should('have.length', 7)
      .then(() => {
        cy.url().then((urlToCompare) => {
          cy.log(urlToCompare)
          cy.getBySel('viewButton').first().click()
          cy.getBySel('searchable-select-filter3').find('button').click()
          cy.get('div[role="option"]').first().click()
          cy.getBySel('filter-tag-wrapper').should('exist')
          cy.getBySel('reset-button').click()
          cy.getBySel('filter-tag-wrapper').should('not.exist')
          cy.getBySel('back-icon').click()
        })
      })
  })
})
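
The spec above leans on a custom cy.getBySel command. For readers who have not seen this pattern, here is a minimal sketch of how such a command is typically defined, assuming the app marks testable elements with a data-test attribute; the actual attribute name and typings in our project may differ.

TypeScript
// cypress/support/commands.ts (sketch)
declare global {
  namespace Cypress {
    interface Chainable {
      // Query an element by its data-test attribute
      getBySel(value: string): Chainable<JQuery<HTMLElement>>
    }
  }
}

Cypress.Commands.add('getBySel', (value: string) => {
  // Selecting by a dedicated test attribute keeps tests decoupled from styling and markup changes
  return cy.get(`[data-test="${value}"]`)
})

export {}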

This is the structure of the folders and files we had.

Discussions and roadmap

After establishing the objectives of our tests in Cypress and evaluating what we had, we drew up an action plan of the activities that we should take. These activities would run in parallel with our development and testing activities in the software development life cycle (SDLC).

Some activities were planned into the sprint; others we picked up whenever we had some free time. All of them were discussed with the team, and their feedback was taken on board. We then drew up a plan that made sense for everyone.

Our initial list looked like this:

  • Check test duplication and make tests unique
  • Improve the way we log in
  • Update the Cypress version
  • Refactor the code by applying a design pattern
  • Increase test coverage
  • Add tags to the tests
  • Generate test reports
  • Add the tests after deployment in the Dev/QA environments
  • Add the tests after deployment in the Prod environment

Code time!

Cypress version from 9.7.0 to 11.1.0

With this update, we gained access to new features that would help us in various ways. The main ones were cy.session and cy.origin, which made it possible to log in using Cypress itself, keep our user token across all scenarios, and access different domains (our login works through PingOne and Microsoft authentication).

Our original approach was to always log in with Puppeteer before any spec (test file) and store some information (token and cookies) with hand-written code. With the improvements in Cypress, we started using:

  • cy.origin: this helps access different domains.
  • cypress-localstorage-commands library: this preserves local storage (where our session data lives) between tests.
  • cy.session: this checks the session according to the test user used.

This allowed us to eliminate the dependency on Puppeteer, which is important because Puppeteer added extra complexity to the setup and made the code harder to understand. Using a native Cypress solution also significantly reduced the time we spent logging in for each spec.
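
To make the new login flow more concrete, below is a minimal sketch of a session-cached login command built on cy.session and cy.origin. The identity provider URL, selectors, environment variable names, and the post-login check are assumptions for illustration; our real login goes through PingOne and Microsoft authentication and also relies on cypress-localstorage-commands.

TypeScript
// cypress/support/commands.ts (sketch)
Cypress.Commands.add('loginAsInternalUser', () => {
  const username = Cypress.env('USERNAME')
  const password = Cypress.env('PASSWORD')

  // cy.session caches and restores the session, so the UI login runs only once
  cy.session(
    ['internal-user', username],
    () => {
      cy.visit('/')

      // cy.origin lets commands run against the external identity provider domain
      cy.origin(
        'https://login.identity-provider.example', // hypothetical IdP URL
        { args: { username, password } },
        ({ username, password }) => {
          cy.get('input[name="username"]').type(username)
          cy.get('input[name="password"]').type(password, { log: false })
          cy.get('button[type="submit"]').click()
        }
      )

      // Back on our own domain, confirm the session was established (illustrative check)
      cy.url().should('include', '/home')
    },
    { cacheAcrossSpecs: true } // available from Cypress 11, so every spec reuses the same session
  )
})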

Code refactoring with Page Objects

The use of Page Objects was key to improving code readability and making it easier to pinpoint where test errors occur. With a clear separation of responsibilities, the code is much easier to maintain and to reason about. It also became much simpler to create new scenarios, as code could be reused easily.

With those changes, we saw a significant improvement in test run time: it previously took 35 minutes to run 60 tests, and we are now able to run 107 tests in just 18 minutes. The code below shows the changes we made (again, we had to change some variable and selector names for compliance reasons):

TypeScript
import GuidedTour from '../components/guidedTour'
import { BasePage } from '../base/base.page'

export default class Something extends BasePage {
  private readonly accordion: string
  private readonly backBtn: string
  private readonly filter1: string
  ...

  constructor() {
    super()
    this.accordion = 'accordion-button'
    this.backBtn = 'back-icon'
    this.filter1 = 'searchable-select-1'
    ...
  }

  get getAccordion(): Cypress.Chainable<unknown> {
    return cy.getBySel(this.accordion)
  }

  get getBackBtn(): Cypress.Chainable<unknown> {
    return cy.getBySel(this.backBtn)
  }

  get getFilter1(): Cypress.Chainable<unknown> {
    return cy.getBySel(this.filter1)
  }
  ...

  validateFilterOnTable(
    elementFilter: Cypress.Chainable<unknown>,
    elementTable: Cypress.Chainable<unknown>
  ): void {
    elementFilter
      .invoke('val')
      .then(value =>
        elementTable.each($el => cy.wrap($el).should('contain.text', value))
      )
  }

  visit(): void {
    cy.intercept({
      method: 'GET',
      url: '/api/something'
    }).as('something')
    cy.visit(this.url, { failOnStatusCode: false }).wait('@something')
  }
}
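
With the page object in place, the specs themselves become short and declarative. The sketch below shows how a spec might consume the class above; the import path is an assumption, and the assertions simply mirror the earlier examples.

TypeScript
// something.spec.ts (sketch)
import Something from '../pages/something.page' // hypothetical path

const page = new Something()

describe('something page', () => {
  beforeEach(() => {
    cy.loginAsInternalUser()
    page.visit() // intercepts and waits for the page's data before the test starts
  })

  it('should have the first register expanded by default', () => {
    page.getAccordion.first().should('have.attr', 'aria-expanded', 'true')
  })

  it('should show filters open as default', () => {
    page.getFilter1.should('exist')
  })
})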

Use of cy.intercept

While writing these tests and increasing scenario coverage, we started to notice intermittent failures that occurred with significant frequency.

We realised that some React elements took a while to render compared to the speed at which Cypress executes commands. So we identified which elements had this problem, applied cy.intercept to the requests that populate those fields, and added a wait on those requests (an example can be seen in the visit() method in the code above).
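
The same idea applies to individual fields, not just page loads. A hedged sketch, in which the route and the waiting strategy are illustrative assumptions, looks like this:

TypeScript
// Wait for the data behind a slow-rendering filter before interacting with it
cy.intercept('GET', '/api/something/filters*').as('filterOptions') // hypothetical route

cy.getBySel('searchable-select-filter1').find('button').click()
cy.wait('@filterOptions') // only continue once the options have actually arrived
cy.get('div[role="option"]').first().click()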

Use of cypress-slow-down library

To improve the accuracy of our tests, we also added a library that slows down Cypress commands and simulates human behaviour more closely for these specific elements and flows.

With these changes made, the tests became stable. They take an average of 5 seconds longer to complete, but that is a small price to pay for accurate tests (one example is the test 'should open guided tour and step through the tour').
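
As an illustration, this is roughly how the library is applied, assuming its slowCypressDown entry point; the delay value and the spec shown are only examples.

TypeScript
// guided-tour.spec.ts (sketch)
import { slowCypressDown } from 'cypress-slow-down'

// add a small pause between Cypress commands in this spec only
slowCypressDown(100)

describe('guided tour', () => {
  it('should open guided tour and step through the tour', () => {
    // ...interactions that previously outpaced the React rendering
  })
})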

Change/separate folder structure

Finally, we saw that keeping the Cypress tests alongside our frontend project was not a good idea, given that we need the backend and the database to run them. As the project uses a mono-repo strategy, we simply migrated the tests to a separate project within it. This also made maintaining and evolving the code a lot easier.

The new structure of the folders and files.

Next steps

At the time of writing, we are refactoring the code to further improve the pattern we established.

As the tests were stable and the team was happy with how easy it was to create new scenarios, we decided to evolve the design to make it even more intuitive.

We are following a pattern discussed within NearForm, one that we believe applies principles such as SOLID and DRY very precisely.

After these changes, we will start implementing tags, together with running the tests at the end of deployments to the Dev and QA environments.

Tags will give us the flexibility to run the tests in whatever way best suits each situation and will increase the quality of our deliveries.

Using tags with the cypress-grep library:

TypeScript
it('should successfully land on the something page', { tags: '@production' }, () => {
  cy.url().should('include', page.url)
  page.getTable.should('be.visible')

  page.getAccordion.first().should('have.attr', 'aria-expanded', 'true')

  page.getFilterBtn.should('exist')
  page.getFilter1.should('be.visible')
  page.getFilter2.should('be.visible')
})
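
For completeness, here is a rough sketch of how the grep plugin gets wired up and how a tagged run would be triggered. The import path and CLI flag follow the cypress-grep documentation as we understand it (the package has since been renamed to @cypress/grep), so treat the details as assumptions.

TypeScript
// cypress/support/e2e.ts (sketch)
import registerCypressGrep from 'cypress-grep'

// registering the plugin makes Cypress honour the { tags } option in it() blocks
registerCypressGrep()

// A tagged run can then be triggered from the command line, for example:
//   npx cypress run --env grepTags=@production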

Another improvement we intend to make is to generate reports from our test runs. This would make it clearer to other stakeholders what we are testing, in terms of monitoring quality, compliance, and safety. We did a proof of concept with the mochawesome library and had good results: it works very well and produces accurate information about the tests. We still have to decide how we are going to share these reports and which tests people want to follow.
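
As a sketch of what such a setup can look like, a cypress.config.ts along these lines points Cypress at mochawesome; the reporter options shown are common mochawesome settings, and the values are assumptions rather than our final configuration.

TypeScript
// cypress.config.ts (sketch)
import { defineConfig } from 'cypress'

export default defineConfig({
  // route test results through the mochawesome reporter
  reporter: 'mochawesome',
  reporterOptions: {
    reportDir: 'cypress/reports', // where the HTML/JSON output is written
    overwrite: false,             // keep one report per spec run
    html: true,
    json: true,
  },
})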

The importance of what we did

In our scenario, where the frontend, backend, and database are talking to each other all the time, having E2E tests that go through all the systems and validate several layers and functionalities at once is very important for the security of our development.

During the development of new functionality, we can verify changes locally to ensure that the completed work did not introduce a new bug. After a deployment to any environment, we can check that configurations are correct and flows are working as they should, as well as perform a minimal verification of integrations with systems we do not control.

Our work means that all of this can and will be done automatically, without having to spend time remembering specific scenarios that tend to generate a lot of problems for our users. By implementing these measures, the team and the client can be more confident in the testing process, leading to increased accuracy and security throughout the entire development workflow.

Thanks to our latest improvements, our testing framework has become more reliable than ever. We've eliminated false positives and negatives, and we can now run tests in any environment (except production, for now).

This boost in confidence has had a positive impact on our security during development and manual testing. With many common cases and integrations properly covered, we can focus on testing alternative scenarios and edge cases, reducing the time spent on manual testing.

Furthermore, these improvements have had secondary effects, such as reducing costs associated with existing pipelines and resources. By running more tests locally and with faster test results, we can provide quick feedback to developers, resulting in general savings for the team. With these improvements, we can find and fix bugs in the early stages of development, instead of encountering problems in production environments.

In addition, these changes have improved scalability and maintainability. Through refactoring, creating new test scenarios and maintaining existing ones has become much easier. Even people who are not as knowledgeable about the code can quickly find the relevant elements, pages, and tests.

The benefits of these improvements are countless, and these are just a few examples of the gains our team has achieved during this period. Overall, these changes have been a major success!
