
Thrown into automation

Situation & Problem

I was thrown into an automation test project.

Concretely, it meant test automation of three applications (different in purpose, look, structure, and behavior) whose regression testing was covered only by an automation test suite written in AutoIt; the codebase was quite complex and large for someone new to automation.
Well, that was not the problem; it is just a description of the situation.

The problems weren't initially visible. I had never done automation before, so I needed to learn quite a bit of the code and get to know all the applications that were part of the project.

The problems were not so apparent at the start, but I would formulate them as:
  • Maintenance of the scripts took too long
    • With each new version of the application, it took some time to adjust the scripts to the changes
    • This delayed the information flow from testers to managers & developers
    • The changes in the application were not clearly communicated to testers
  • Testing was purely through automation scripts, covering regression relatively poorly (~45% code coverage)
    • There was no exploratory testing, retesting of bugs, performance testing...
  • Scripts were dependent on coordinates and simulated keystrokes
    • Every small change in the position of windows or controls caused the scripts to get stuck
    • We were missing direct control handling, for example through IDs (see the sketch after this list)
  • There were too few control points
    • The script executed mostly blindly for long stretches of time
    • It could often happen that the script misclicked and then went rampant, opening programs, writing in them and so on...
  • Oracles were weak
    • We had no specifications or requirements
    • The task descriptions for the developers' work were often too short and unclear (we missed quite a lot of the information flow)
    • There was no written documentation on the script execution
    • The manuals were either not available or written mostly for business purposes, giving little information about the other parts
    • Feedback from business about calculation data was slow
  • To sum it up: Testing was underfed 
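
To make the coordinate and checkpoint problems concrete, here is a minimal AutoIt sketch contrasting the fragile style we had with direct control handling and an explicit checkpoint. The window title "Order Entry" and the control descriptions are hypothetical, not taken from the real project:

    ; Fragile style: blind coordinate click plus simulated keystrokes;
    ; it breaks as soon as the window moves or the layout changes
    MouseClick("left", 450, 320)
    Send("report{ENTER}")

    ; More robust style: verify the application state at a checkpoint,
    ; then address the controls directly instead of by position
    If Not WinWaitActive("Order Entry", "", 10) Then
        ConsoleWrite("Checkpoint failed: 'Order Entry' window not active" & @CRLF)
        Exit 1
    EndIf
    ControlSend("Order Entry", "", "[CLASS:Edit; INSTANCE:1]", "report")
    ControlClick("Order Entry", "", "[CLASS:Button; TEXT:Search]")

Failing fast at such checkpoints is what keeps a single misclick from cascading into the rampant behavior described above.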

Journey to the solution

First of all, the solutions to the problems were often a product of the team; I don't want to take sole credit for them.


Collaboration

At the start, the most pressing weakness was the maintenance issue. One of the reasons for it was the lack of communication between testing and development.

To mitigate this issue, we decided to hold regular demo sessions in which the developers introduced upcoming changes to the application, giving the testers a chance to prepare for them and to plan the tests with a better overview. Demo sessions were organized every second week. There was a need to keep fueling interest in them, because the sessions were otherwise unintentionally forgotten or ignored.

Another benefit of the collaboration with developers is an increase in the testability of the application. The automation tool (AutoIt) often lacks the capacity to check various inputs and basically to "see" the current state of the application. These issues were partially solved by custom functionality that gave the automation tool these "eyes", or the ability to browse through items or tabs with the help of command inputs (not only mouse clicks).
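
As an illustration of such a testability hook, here is a small AutoIt sketch. It assumes the developers expose the current state in a readable status-bar control; the window title and control descriptions are again hypothetical:

    ; Read the AUT's state from a control exposed by the developers,
    ; instead of guessing it from pixel positions
    Local $sState = ControlGetText("Order Entry", "", "[CLASS:msctls_statusbar32; INSTANCE:1]")
    If Not StringInStr($sState, "Ready") Then
        ConsoleWrite("AUT not ready, reported state: " & $sState & @CRLF)
        Exit 1
    EndIf

    ; Browse tabs through a command input rather than a mouse click
    ControlCommand("Order Entry", "", "[CLASS:SysTabControl32; INSTANCE:1]", "TabRight", "")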


Reporting

It is important for managers to stay informed about the work you are doing. That's why we started to send a weekly summary report every Friday (or on the last workday), on top of the weekly meetings.


Adding up

Beyond the actual testing and maintenance work, there was a lot of work in increasing the coverage and the testing possibilities on the project. Starting with covering more of the actual AUTs (applications under test), and continuing with test automation for other applications and performance testing for one particular part, there was still a need to provide something more. I would say the big benefits came from the following.


Exploratory testing

The automation so far touched purely the regression part; there was no room to provide testing for the new features, and the need arose to cover them. An exploratory approach was the most obvious choice. To differentiate this approach from "random clicking", we needed to pay attention to:
  • Planning
    • There is a clear need to gather information about which areas are the target of this approach
    • Gather as much information about these areas as possible: demo sessions, asking around, manuals, task descriptions
  • Reporting
    • Information about what we cover with this approach
    • Information about issues found through it, noting in the bug reports that they were found thanks to exploratory testing
  • Evaluating
    • Evaluate and analyze the exploration for enhancement of future runs

This approach proved to be a worthy addition to the existing automation on the project, both in defects found and in information gathered about the AUT.


Requirements

A requirements process is starting up to improve the information flow about changes in the AUT, but it is still in its beginnings; I hope for the best.


New tools

The current tool on the project is quite outdated, so this area offers big potential for improvement. We have started to look for alternatives; however, a change of automation tool would mean a big workload in rewriting the current coverage and overcoming the problems that could emerge.

The advantages of switching to a new tool would not come fast, but in the mid and long term there would be:
  • Increased reliability of runs
  • More coverage
  • Quicker production of new scripts
  • More possibilities for what to do with the scripts
  • Greater clarity of runs, reports...

Conclusion

Pure automation is, in my opinion, rarely enough to cover the testing needs of any project. You can put automation in the middle and build on it, but you should never rely only on it. With every test that you automate, you take the dynamic, sapient part out of it, and you must keep that in mind.
