Benchmark Selenium Test Executions on Machines with Different OS and CPU

Choose cost-effective build machines for executing automated tests

Zhimin Zhan

--

A repost of my daughter’s article with permission and slight modification. I added it here for the convenience of my blog subscribers.

Execution speed is an important factor in Automated Functional UI Testing, especially for a large regression suite (e.g. 200+ user-story-level tests, Level 3 of the AgileWay CT Grading). While we can run the tests on a Continuous Testing server that supports real parallel test execution at the server level (like BuildWise), rather than within the test automation framework (which is a bad practice), it is still wise to choose a fast, reliable, and affordable build machine for executing automated tests.

In this article, I will benchmark executions of the same Selenium WebDriver test on the following build machines (in my testing lab):

  • MacMini 2012 — i7
  • iMac 2015 — i5
  • MacMini 2020 — M1 (the base model)
  • Windows 10 — in a VM inside iMac 2015
  • Linux Mint — in a VM inside iMac 2015

Test Case: a Selenium WebDriver test written in RSpec. It is an end-to-end test for WhenWise, covering a wide range of web elements found in a typical modern web app.
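To compare machines fairly, the same suite needs to be timed the same way on each one. A minimal sketch of that timing step, using Ruby's standard Benchmark module to wrap the RSpec invocation (the spec file name here is a hypothetical placeholder, not the actual WhenWise test):

```ruby
require "benchmark"

# Hypothetical helper: run the suite once via a shell command and
# report the wall-clock time, as you would on each build machine.
def time_test_run(command = "rspec spec/whenwise_e2e_spec.rb")
  elapsed = Benchmark.realtime { system(command) }
  puts format("Run took %.1f seconds", elapsed)
  elapsed
end
```

Running `time_test_run` on each machine and recording the elapsed seconds gives directly comparable numbers, independent of OS-specific timing tools.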

--
