Times have, without a doubt, changed. The majority of organizations, enterprises and small businesses alike, are shifting toward DevOps practices with the following objectives in mind: deliver software fast, continuously, and with top-notch quality.
To deliver software fast, many activities throughout the development and delivery (DevOps) pipeline must be automated to the maximum degree. The key reason behind the need to automate is the continuous delivery cadence: having an automated “machine” that can validate your software upon each code change and feed insights back to the developer is key to success.
While having an automated process for continuous testing in place is great, what the market is starting to acknowledge and explore is the use of machine-learning capabilities across the entire software delivery process.
To understand the role and benefits of ML in a DevOps reality, let’s zoom into the automated testing processes for web applications.
Most teams today, both Dev and Test, use Selenium WebDriver to automate functional testing for their websites (responsive, progressive, or other). With that in mind, the scale of web test automation grows exponentially in the DevOps reality, where scope is added to the web product on a weekly or bi-weekly basis, together with new functional tests.
The growth in test data is one aspect of the story; the number of platforms to be covered also grows, driven by digital transformation and the multi-screen usage of mobile and desktop web.
To address the growing scale of tests and platforms, along with the pace of innovation, teams seek a solution that can automatically dig into test data, quality, and usage patterns, and wisely recommend to developers and testers ways to optimize their entire pipeline.
Machine learning (ML) is one way of addressing the complexity of web DevOps processes. With such tools, teams can get clear, high-confidence insights from their entire testing process and act upon them. An additional angle to ML in testing is the ability to develop test automation code with minimal to zero written code, and to continuously maintain that code as things change throughout the product lifecycle.
Machine-Learning and Web Testing
Specifically, with Selenium for functional web test automation, teams struggle in various areas:
- Test flakiness and website object management
- Test code maintenance as the product evolves
- Desktop platform and code coverage analysis
- Generic reporting and release decision making
Each of the above topics is, on its own, a hugely time-consuming activity; done as a recurring practice each sprint, they add up to a serious bottleneck.
Test Flakiness & Object Management
When a test case is badly designed, or uses a wrong object identifier (a flaky XPath, etc.), it carries a heavy quality debt for each job within the continuous integration (CI) or regression cycles.
Having an automated machine/engine that scans through the test cases and objects, and can identify and hopefully resolve such issues, is a huge advantage to the entire process.
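One piece of such an engine can be sketched without any ML at all: scoring how often each test's outcome flips across CI runs, since a test that alternates between pass and fail is more likely flaky (a brittle locator, a timing issue) than genuinely broken. This is a minimal illustrative sketch; the function name and input shape are assumptions, not part of any specific tool.

```python
from collections import defaultdict

def flakiness_scores(runs, min_runs=5):
    """Score each test by how often its outcome flips across CI runs.

    `runs` is a list of (test_name, passed) tuples in execution order.
    A score near 1.0 means the test alternates almost every run (likely
    flaky); 0.0 means it is stable (consistently passing or failing).
    """
    history = defaultdict(list)
    for name, passed in runs:
        history[name].append(passed)

    scores = {}
    for name, results in history.items():
        if len(results) < min_runs:
            continue  # not enough data to judge this test
        flips = sum(1 for a, b in zip(results, results[1:]) if a != b)
        scores[name] = flips / (len(results) - 1)
    return scores
```

A real engine would go further, e.g. correlating flips with the locators each test uses, but even this simple signal lets a team triage flaky tests instead of chasing false failures.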
Test Code Maintenance
Ongoing test code development and proper object allocation is one thing, but what happens when new functionality is added to the web product? Such changes automatically translate into new test code that needs to be written, and new objects or elements added to the site. A machine-learning engine that continuously scans your website can quickly highlight the changes, identify the gaps, and even generate black-box exploratory tests with the relevant object identifiers. This capability can again be a huge enabler for continuous testing.
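The scanning step described above boils down to comparing snapshots of the page's element identifiers over time. The sketch below shows that comparison in its simplest form; the element names and locators are purely illustrative, and a real engine would crawl the live DOM rather than take dictionaries as input.

```python
def diff_page_objects(old_elements, new_elements):
    """Compare two snapshots of a page's elements and report what a
    scanning engine would flag for test maintenance.

    Each snapshot maps a logical element name to its locator
    (e.g. an id or CSS selector).
    """
    added = sorted(set(new_elements) - set(old_elements))
    removed = sorted(set(old_elements) - set(new_elements))
    changed = sorted(
        name for name in set(old_elements) & set(new_elements)
        if old_elements[name] != new_elements[name]
    )
    return {"added": added, "removed": removed, "changed": changed}
```

"Added" elements are candidates for new exploratory tests, "removed" and "changed" ones point at existing tests that will break unless their locators are updated.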
Platform Coverage for Mobile and Desktop Web
While ML can optimize the test code and the objects associated with the tests, knowing which platforms to test against as the market evolves is a pain of its own. With ML solutions, teams can identify the most error-prone platforms (e.g. Windows 8.1/Firefox) and focus testing efforts there, rather than wasting time on the more robust platforms. Teams should keep this analysis aligned and monitored continuously, since platform stability changes from one mobile OS or desktop browser release to the next.
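At its core, identifying error-prone platforms is a matter of ranking platform/browser combinations by failure rate over recent runs. The following sketch shows that ranking under assumed inputs; the platform names are examples only, and a production tool would also weight results by recency and usage data.

```python
from collections import Counter

def rank_platforms_by_failures(results):
    """Rank platform/browser combinations by failure rate, so teams can
    focus testing on the most error-prone ones first.

    `results` is a list of (platform, passed) tuples from test runs.
    Returns [(platform, failure_rate), ...] sorted worst-first.
    """
    totals, failures = Counter(), Counter()
    for platform, passed in results:
        totals[platform] += 1
        if not passed:
            failures[platform] += 1
    rates = {p: failures[p] / totals[p] for p in totals}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
```

The top of the ranking tells the team where to concentrate coverage; platforms near zero can be tested less frequently without much added risk.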
Generic Reporting and Release Decision Making
At the end of each sprint in a DevOps process, the management team needs to make a simple go/no-go decision: release the product, or delay it based on quality issues.
Having full pipeline visibility from a quality standpoint at any given time, tied to the above pillars of robust and reliable testing on the “right” platforms, is the final step in a mature ML-for-DevOps practice.
An ML-based test report that aggregates all test data across all supported platforms for your websites gives teams high-confidence, data-driven decision-making abilities.
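The aggregation behind such a go/no-go decision can be sketched as a simple rule over per-platform pass rates. The threshold, input shape, and required-platforms check below are illustrative assumptions, not a standard; a real report would add trends, blocking-defect counts, and coverage data.

```python
def go_no_go(platform_results, pass_threshold=0.95, required_platforms=()):
    """Aggregate per-platform pass rates into a single release decision.

    `platform_results` maps platform name -> (passed, total). The
    decision is "go" only if every required platform was tested and
    every tested platform meets the pass-rate threshold.
    """
    missing = [p for p in required_platforms if p not in platform_results]
    if missing:
        return {"decision": "no-go", "reason": f"untested platforms: {missing}"}
    failing = sorted(
        p for p, (passed, total) in platform_results.items()
        if total == 0 or passed / total < pass_threshold
    )
    if failing:
        return {"decision": "no-go", "reason": f"below threshold: {failing}"}
    return {"decision": "go", "reason": "all platforms meet the pass-rate bar"}
```

The value of automating this rule is that the decision is reproducible and explainable: the report states exactly which platform, if any, blocked the release.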
DevOps is not a trend but a living reality for most organizations, and being able to optimize the activities associated with releasing software in that reality can dramatically enhance both the release cadence and the release quality in a continuous manner. With that in mind, note that ML tools are themselves evolving and being invested in to meet DevOps needs; it is therefore important to understand what works for your team, what is still less mature, and to move forward accordingly.