Situation & Problem
I was thrown into an automation test project.
Concretely, test automation of three different applications (different in purpose, look, structure, and behavior) whose regression testing was covered only by an automated test suite written in AutoIt; the codebase was quite large and complex for someone new to automation.
Well, that was not the problem; it is just a description of the situation.
The problems weren't initially visible. I had never done automation before, so I needed to learn quite a bit of the code and get to know all the applications that were part of the project.
The problems were not so apparent at the start, but I would formulate them as:
- Maintenance of the scripts took too long
- With each new version of the application, it took some time to adjust the scripts to the changes
- This delayed the information flow from testers to managers and developers
- The changes in the application were not clearly communicated to testers
- Testing was done purely through automation scripts covering regression, and rather poorly at that (~45% code coverage)
- There was no exploratory testing, retesting of bugs, performance testing...
- Scripts were dependent on coordinates and simulated keystrokes (a sketch illustrating this follows after this list)
- Every small change in the positions of windows or controls caused the scripts to get stuck
- We were missing direct control handling, for example through control IDs
- There were too few control points
- The script executed mostly blindly for a very long time
- It often happened that the script misclicked and then ran rampant, opening programs, typing into them, and so on...
- Oracles were weak
- We had no specifications or requirements
- The task descriptions for the developers' work were often too short and unclear (we missed quite a lot of the information flow)
- There was no written documentation on the script execution
- The manuals were either not available, or written mostly for business purposes and gave little information about other parts
- Feedback from business about calculation data was slow
- To sum it up: Testing was underfed
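To make the coordinate and control-point problems concrete, here is a minimal AutoIt sketch. The window title, control descriptors and file name are invented for illustration (the real scripts are not public); the point is the contrast between a blind, coordinate-driven step and one that addresses controls directly and checks the state before acting.

```autoit
; Fragile style: blind coordinate clicks and simulated keystrokes.
; Any change in window position or layout breaks this silently.
MouseClick("left", 412, 318)        ; hope the "Open" button is still there
Send("report_2015.dat{ENTER}")      ; hope the focus is where we expect it

; More robust style: address controls directly and add a control point.
; $sTitle and the control descriptors below are hypothetical.
Local $sTitle = "Report Viewer"
If Not WinWaitActive($sTitle, "", 10) Then
    ; control point: stop instead of clicking blindly into whatever has focus
    ConsoleWriteError("Window '" & $sTitle & "' not active - aborting" & @CRLF)
    Exit 1
EndIf
ControlSetText($sTitle, "", "[CLASS:Edit; INSTANCE:1]", "report_2015.dat")
ControlClick($sTitle, "", "[CLASS:Button; TEXT:&Open]")
```

Even a small change like this removes the two biggest failure modes we saw: scripts getting stuck after layout changes, and misclicks running rampant in other windows.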
Journey to the solution
First of all, the solutions to the problems were often a product of the team; I don't want to take sole credit for them.
Collaboration
At the start, the most apparent weakness was the maintenance issue. One of the reasons was the lack of communication between testing and development.
To mitigate this issue, we decided to hold regular demo sessions in which the developers introduced changes to the application, giving the testers a chance to prepare for them and plan the tests with more overview. Demo sessions were organized every second week. The interest needed constant fueling, because the sessions were otherwise unintentionally forgotten or ignored.
Another benefit of the collaboration with developers was increased testability of the application. The automation tool (AutoIt) often lacks the capacity to check various inputs and basically to "see" the current state of the application. These issues were partially solved by custom functionality that gave the automation tool these "eyes", or the ability to browse through items or tabs via command inputs rather than mouse clicks alone.
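As a rough idea of what those "eyes" look like, here is a hedged AutoIt sketch; the window title, control descriptors and expected status text are invented, and the actual custom functionality on the project was application-specific.

```autoit
; Hypothetical example: read the application's state instead of assuming it.
Local $sTitle = "Calculation Module"   ; assumed window title

Local $sStatus = ControlGetText($sTitle, "", "[CLASS:Static; INSTANCE:3]")
If $sStatus <> "Calculation finished" Then
    ConsoleWriteError("Unexpected state: '" & $sStatus & "'" & @CRLF)
    Exit 1
EndIf

; Browse through tabs via a command input rather than clicking on coordinates.
ControlCommand($sTitle, "", "[CLASS:SysTabControl32; INSTANCE:1]", "TabRight", "")
```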
Reporting
It is important for managers to stay informed about the work you are doing. That's why we started sending a weekly summary report every Friday (or on the last workday), on top of the weekly meetings.
Adding up
Beyond the actual testing and maintenance work, there was a lot of work on increasing the coverage and the testing possibilities on the project. Starting with covering more of the actual AUTs (applications under test), continuing with test automation for other applications and performance testing for one particular part, there was still a need to provide something more. I would say big benefits came from:
Exploratory testing
The automation so far covered purely the regression part; there was no room to test the new features. This gap needed to be covered, and an exploratory approach was the most obvious way. To differentiate this approach from "random clicking", we needed to pay attention to:
- Planning
  - There is a clear need to gather information about which areas are the target of this approach
  - Gather as much information about these areas as possible: demo sessions, asking around, manuals, task descriptions
- Reporting
  - Information about what we cover with this approach
  - Information about issues found through it, mentioning in the bug reports that they were found thanks to exploratory testing
- Evaluating
  - Evaluate and analyze the exploration to improve future runs
This approach proved to be a worthy addition to the existing automation on the project, both in defects found and in information gathered about the AUTs.
Requirements
A requirements process is starting to take shape to improve the information flow about changes in the AUTs, but it is still in its beginnings; I hope for the best.
New tools
The current tool on the project is quite outdated, so this area offers big potential for improvement. We have started to look for alternatives; however, a change of automation tool would mean a big workload in rewriting the current coverage and overcoming the problems that could emerge.
The advantages of switching to a new tool would not come fast, but in the mid and long term there would be:
- Increased reliability of runs
- More coverage
- Quicker production of new scripts
- More possibilities for what to do with the scripts
- Greater clarity of runs, reports ...
Conclusion
Pure automation is, in my opinion, rarely enough to cover the testing needs of any project. You can put automation in the middle and build on it, but you should never rely on it alone. With every test that you automate, you take the dynamic, sapient part out of it, and you must keep that in mind.