... or the impact when testing is lacking?
Security breaches, hacks, exploits, major ransomware attacks - their frequency
seems to be increasing lately. These can result in financial, credibility and data
loss, and increasingly even the endangerment of human lives.
I don't want to propose that testing will always prevent these situations.
There were probably testers present (and I'm sure often also security testers) when
such systems were created. I think that there was simply a general lack of
risk-awareness on these projects.
From a purely technical point of view, there are many tools and techniques to harden software against security threats. Some of them offer automated scans that crawl through your website and might discover the low-hanging fruit of security weaknesses (ZAP, Burp Suite...), without requiring much technical knowledge from the person operating them.
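To give a feel for what that low-hanging fruit can look like, here is a minimal Python sketch - not ZAP or Burp Suite themselves, just the kind of simple check such scanners automate - that flags missing security response headers. The target URL is a placeholder; point it only at a system you are allowed to test.

```python
# Minimal sketch: report security response headers missing from a page.
# This is only an illustration of the kind of weakness automated scanners flag.
import requests

TARGET = "https://staging.example.com"  # hypothetical test environment

# Headers whose absence is commonly reported as a weakness.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url: str) -> list[str]:
    """Return the expected security headers missing from the response."""
    response = requests.get(url, timeout=10)
    return [name for name in EXPECTED_HEADERS if name not in response.headers]

if __name__ == "__main__":
    missing = check_security_headers(TARGET)
    if missing:
        print(f"Missing security headers on {TARGET}: {', '.join(missing)}")
    else:
        print(f"All expected security headers present on {TARGET}")
```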
The more important aspect, however, is the mindset with which you approach the product. The tester is often the first person to discover these risks, simply because of that difference in mindset.
We don't ask 'how can this work?', but 'where might it fail?'
Let's look at a security approach. There is a methodology in security called 'threat modeling' (building a 'threat model'), which forms the security strategy before even looking at the technical details of the system. It describes the risk analysis from a security point of view: it maps the set of possible adversaries who could attack our system and the vulnerabilities/attack vectors they could exploit. It helps to pinpoint the places where the system is weakest against a probable attack, so that security improvements can be focused more effectively.
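As a rough illustration (not a formal threat-modeling framework), the core of the idea fits in a few lines of Python: enumerate adversaries and the attack vectors they could exploit, score each pairing, and sort by risk to see where to focus first. The entries and the 1-5 scores below are invented for the example.

```python
# Sketch of a tiny threat model: adversaries, attack vectors, and a simple
# risk score (likelihood x impact) used to rank where the system is weakest.
from dataclasses import dataclass

@dataclass
class Threat:
    adversary: str      # who might attack
    attack_vector: str  # what they could exploit
    likelihood: int     # 1 (unlikely) .. 5 (very likely)
    impact: int         # 1 (minor) .. 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    Threat("external attacker", "login form allows unlimited password guesses", 4, 4),
    Threat("malicious insider", "customer data stored unencrypted", 2, 5),
    Threat("opportunistic bot", "admin panel reachable without VPN", 3, 3),
]

# Focus security improvements where the risk is highest.
for threat in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"risk={threat.risk:2d}  {threat.adversary}: {threat.attack_vector}")
```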
This methodology is in many aspects similar to the exploratory testing mindset. Both try to learn about the system; exploratory testing is more general, while threat modeling has a more specific scope. I admit that I have never done threat modeling professionally. However, from testing numerous systems it is clear that something similar to a threat model forms in the mind of a tester. When testing a system, after creating and continually refining a model in my head, I often ask myself where the places that might be exploited are (the "attack vectors"). There are often security defects which can be found without any penetration testing experience - buggy password prompts (revealing information or allowing unauthorized entry), data leaks, unencrypted storage of sensitive data, etc.
Usually these are not identified thanks to any specification or predefined test case (unless you are focusing specifically on security aspects); you just need to look for these loose ends as you navigate through the system.
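One of those buggy password prompts, for instance, can be caught with a very small check. This is a hedged sketch assuming a hypothetical login endpoint and field names: it fails if the error message differs between an unknown username and a wrong password, which would let an attacker enumerate valid usernames.

```python
# Sketch of a check for an information-revealing login prompt.
# URL and form field names are hypothetical placeholders.
import requests

LOGIN_URL = "https://staging.example.com/login"  # placeholder test environment

def login_error(username: str, password: str) -> str:
    """Submit a failing login attempt and return the server's response body."""
    response = requests.post(
        LOGIN_URL,
        data={"username": username, "password": password},
        timeout=10,
    )
    return response.text

def test_login_does_not_reveal_valid_usernames():
    unknown_user = login_error("no-such-user", "wrong-password")
    known_user = login_error("existing-user", "wrong-password")
    # If the messages differ, the prompt reveals which usernames exist.
    assert unknown_user == known_user, "Login error message reveals whether a username exists"
```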
TL;DR
Any person with the right mindset can make a difference in software security. In the absence of a security specialist, this person is often your software tester. I hope you have one ;)
Visit us for more info