Effective Security Assurance Testing
The InfoSec Consulting Series #12
By Jay Pope
The digital economy is changing the way businesses are run. But it also presents a challenge for the way security and quality testing of systems is carried out. Ensuring the reliable and safe performance of systems means carrying out assurance testing. This is about more than simply verifying that the software works. It’s also about implementing an organised process to understand what happens both when things go right and when there’s an error, about tracking changes, and about being able to consistently reproduce system behaviour. The other thing about effective security assurance testing is that there’s no ‘right’ amount. What is suitable and sufficient for one system may not be for another. It all depends on the complexity of the system, how important it is to the operation of the business, and whether it’s used solely in-house or is customer-facing.
How Much Testing?
Ultimately, the resource devoted to security assurance testing is very much determined by what the impact of a problem would be further down the line. If it would have little or no impact, then the need for testing is clearly minimal. If, on the other hand, it could lose the business time and money, then rigorous testing is imperative.
Many businesses find it helpful to quantify this using a scale to rank projects according to the level of assurance necessary.
* 0 rated projects would be those where no testing is required at all, such as experiments and prototype versions.
* 1 rated would be where the basic function of the system has been checked and the developer believes that it’s ready to use. However, this often means that there’s no formal documentation other than perhaps a list of requirements. This level would again be used for prototypes and where failure isn’t critical.
* 2 rated projects would have their key features listed in a project document. For the key functions, there would also be a documented testing regime. Any changes would also be logged, documented and re-tested. There would also be a formal system for tracking bugs. This level would be suitable for production systems where a failure is unwelcome but wouldn’t be completely catastrophic.
* 3 rated would build on the above by documenting and testing secondary functions too. On systems that are likely to be heavily used, load and stress testing would also be carried out in this scenario. At this level, you would almost certainly want to have the testing handled by a separate team in order to ensure nothing is overlooked. Similarly, documentation needs to be reviewed by a developer or project manager not involved in the original system development. This level of testing would be suitable for production systems in the majority of businesses.
* 4 rated projects are those where the risk of failure would cause serious problems for the business. This level also covers systems or system components that must demonstrate compliance with industry or government regulations and may be independently audited. It requires a strict documentation regime in which each test is recorded along with its results and the version of the software it was run against. There would need to be a version control log tracking changes and showing what action has been taken to resolve any bugs or requests for improvement.
* 5 rated means that the system is critical. This would apply to core financial and medical systems where a problem could lead to a risk of serious harm or loss of funds. This requires all of the controls applied to a project rated 4, but with the addition that everything undergoes a review by a committee of experts. Testing at this level requires approaches including random data entry, defective information, tests of failure modes and simulating cyber attacks.
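As an illustrative sketch only, the scale above could be captured in code so that a project’s risk attributes map to a suggested assurance level. The attribute names and the mapping rules here are assumptions chosen to mirror the list, not a prescribed standard:

```python
# Illustrative sketch: mapping project attributes to the 0-5 assurance
# scale described above. Attribute names and mapping rules are
# assumptions for demonstration, not a formal standard.

def assurance_level(is_prototype: bool,
                    failure_critical: bool,
                    customer_facing: bool,
                    regulated: bool,
                    safety_or_funds_at_risk: bool) -> int:
    """Return a suggested assurance level (0-5) for a project."""
    if safety_or_funds_at_risk:
        return 5  # critical: expert committee review, fuzzing, attack simulation
    if regulated:
        return 4  # auditable: strict documentation and version-control logs
    if customer_facing:
        return 3  # independent test team, load and stress testing
    if failure_critical:
        return 2  # documented tests for key functions, formal bug tracking
    if is_prototype:
        return 0  # experiments and prototypes: no formal testing required
    return 1      # basic function checked by the developer


# Example: a regulated, customer-facing production system
level = assurance_level(is_prototype=False, failure_critical=True,
                        customer_facing=True, regulated=True,
                        safety_or_funds_at_risk=False)
print(level)  # 4
```

In practice the mapping would be agreed with the business, but encoding it makes the rating decision repeatable rather than ad hoc.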
We’ve seen that security assurance testing can be applied at different levels depending on the nature of the project. As a business, you need to consider this in the light of several factors including your budget, timescale and corporate culture. You need to start by looking at what a system failure would cost you. This needs to be offset against the cost of introducing a QA regime to decide whether it’s cost effective. There are two levels to this: the cost of routine, minor defects, and the cost of a total failure.
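One way to make this trade-off concrete is to compare the expected annual cost of defects with and without a QA regime. This is a simplified sketch and all of the probabilities and figures below are invented purely for illustration:

```python
# Simplified expected-cost comparison; all probabilities and
# monetary figures are invented for illustration only.

def expected_cost(p_minor, cost_minor, p_total, cost_total):
    """Expected annual loss: routine minor defects plus total failure."""
    return p_minor * cost_minor + p_total * cost_total

# Without a QA regime (assumed risk levels)
loss_without = expected_cost(p_minor=0.60, cost_minor=20_000,
                             p_total=0.08, cost_total=500_000)

# With a QA regime costing 30,000/year, risks assumed to drop
qa_cost = 30_000
loss_with = qa_cost + expected_cost(p_minor=0.20, cost_minor=20_000,
                                    p_total=0.01, cost_total=500_000)

print(round(loss_without))  # 52000
print(round(loss_with))     # 39000
```

With these assumed numbers the QA regime pays for itself; with a less critical system the arithmetic could easily point the other way, which is exactly the judgement the rating scale is meant to support.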
Another issue is the effect of testing itself. If testing turns into a quest for perfection, it can delay the rollout of a system that is good enough for your needs, and that delay could itself be harming your operation. You also need to take account of regulatory requirements, and what a fine or court case could cost you if you fail to comply.
Businesses will always be under pressure to do things faster and cheaper, and this routinely conflicts with the desire to do things well and to provide a quality service. Devising your testing plan is inherently a balancing act between the two.
Does Your Organisation Need Top Cyber Security Consultants?
We are a team of experts with extensive knowledge and experience of helping organisations improve business performance. Our highly qualified consultancy team can deliver cyber security capability at all levels of your organisation and are on hand to help ensure your projects deliver solutions that are appropriately aligned to your cyber security risk position, and meet technical, business and ethics due diligence requirements. Schedule a call above to learn more about how we can help.