Automated testing – is it worth it?

Actuarial roles often involve the development of models, the outputs of which are frequently used to drive business decisions. It is therefore important to demonstrate that the model that has been developed is correct and appropriate. This article focuses on automated testing, which is the use of scripts to test models or processes. Automated testing attempts to minimise the human input required to carry out each test of a model.

The benefits and pitfalls of automated testing are considered under the following subheadings:

  1. Motivation
  2. Advantages
  3. Disadvantages
  4. Automation Tips
  5. Choice of Language

The examples presented below relate to actuarial models, but most of the discussion is applicable to the testing of any type of modelling or software.

1.    Motivation

Let us consider the following scenario:

You have just added a new index-linked feature to an annuity model. You spend 2 hours manually testing and documenting the results in a spreadsheet.

A week before the model is due to be used on new policies you retest the first case and, to your horror, the result has changed. A colleague who has little understanding of your model has altered a function used in the discount rate calculation (because of a separate requirement for a different product). Since the discount rate code is shared between the products, this change has invalidated your previous testing and results.

You stay until 11pm, repeatedly re-running the tests and using the results to make the necessary corrections to the model. Since your colleague also tested their changes manually, they will now need to revisit their own tests as well.

You come in the next morning and your manager tells you that the format of the input mortality table will change slightly. It is an easy correction but will require the previous night’s testing to be repeated. You expect it to take all morning and therefore need to push back the other developments that your manager had planned – so not only is your time lost but the team is delayed as a whole.

As we can see, the time and effort required in a manual testing approach increases rapidly with the size of the team, the complexity of the model and the criticality of the result. Automated tests would allow us to test the model at any point with minimal effort and therefore reduce the risk of delaying other projects.

2.    Advantages

There are many advantages to automated testing, but the key ones follow directly from the scenario above: once the tests have been written they can be re-run at any point with minimal effort, the results are produced consistently, and the risk of delays to other work is reduced.

For more detail, the following article is recommended reading: https://saucelabs.com/blog/top-10-benefits-of-automated-testing

3.    Disadvantages

As with all efficiency techniques there are also disadvantages:

The code for an automated test will make assumptions (often implicitly) regarding the format of the model’s inputs and outputs. When models change it is sometimes necessary to update the automated tests (e.g. if a new column is added in a spreadsheet). To make these updates, you may need a good understanding of the model and the programming language.

Thus, the decision of how much effort to put into designing automated tests requires a prediction of how often you expect to have to run the tests and the extent to which the model (in particular, its inputs and outputs) will change in the future. Consideration must also be given to what needs to be tested and how you prove the automated testing is comprehensive enough to be useful.
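To make the maintenance point above concrete, the short sketch below (in R, with a hypothetical output file, column name and baseline figure that do not come from any real model) shows a test whose assumption about the output format is made explicit: reading the result by column name rather than by position means that adding a new column to the output does not force the test to be rewritten.

library(testthat)

# Hypothetical example: check the total reserve in a model's CSV output.
results <- read.csv("model_output.csv")   # assumed output file name

test_that("total reserve matches the documented baseline figure", {
  # Fragile alternative: sum(results[[3]]) assumes the reserve is still the
  # third column, so inserting a new column changes what is being tested.
  # Referring to the column by name is more robust to format changes.
  total_reserve <- sum(results[["reserve"]])
  expect_equal(total_reserve, 1234567.89, tolerance = 1e-6)  # baseline value is illustrative
})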

4.    Automation tips

Automated testing relies on both model code and test code. The design of each is therefore important, and in this section we give a few tips on how best to set these up.

4.1 Model code

4.2 Test code

5.    Choice of language

Below are a few case studies covering some of the different approaches we commonly encounter at APR – there are of course many others. Each of the options below is freely available. Proprietary testing software does exist, but it comes at a cost and we have not considered it here.

5.1 Excel and VBA

Testing: Model results between different versions.

VBA is used to import the model results into a spreadsheet, and then Excel formulae compare the two sets of results and highlight any differences to the user. Since Excel/VBA are familiar tools in the workplace, this approach is easy to maintain over time.
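The implementation described here uses VBA to import the results and Excel formulae to compare them; purely to illustrate the same version-comparison logic, here is a minimal sketch in R (the file names, policy key and column names are hypothetical).

library(testthat)

old <- read.csv("results_v1.csv")   # assumed export from the previous model version
new <- read.csv("results_v2.csv")   # assumed export from the updated model version

test_that("updated model reproduces the previous version's results", {
  # Align the two result sets on a policy identifier before comparing
  merged <- merge(old, new, by = "policy_id", suffixes = c("_old", "_new"))

  # Every policy should appear in both sets of results
  expect_equal(nrow(merged), nrow(old))
  expect_equal(nrow(merged), nrow(new))

  # Reserves should agree to within a small tolerance; any differences are
  # reported to the user by the failing expectation
  expect_equal(merged$reserve_new, merged$reserve_old, tolerance = 1e-8)
})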

5.2 SQL

Testing: Data extracts from a database.

Stored procedures are used to extract information from a database that is relevant for a particular purpose (e.g. downstream models). Tests can be performed on the consistency of the data (e.g. checking for missing entries or inconsistent records) and, if these tests fail, the process is halted and the data error highlighted.
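In this approach the checks themselves live in SQL; as a rough illustration of the same idea driven from R (the database, table and column names below are assumptions, using the DBI and RSQLite packages), the process can be halted with an informative error as soon as a check fails.

library(DBI)

con <- dbConnect(RSQLite::SQLite(), "extracts.db")          # assumed local database
extract <- dbGetQuery(con, "SELECT * FROM policy_extract")  # assumed extract table

# Halt the process, highlighting the data error, if the extract fails basic
# checks, so that downstream models are never run on bad data.
stopifnot(
  "Missing policy IDs in extract"    = !any(is.na(extract$policy_id)),
  "Duplicate policy IDs in extract"  = !any(duplicated(extract$policy_id)),
  "Negative sums assured in extract" = all(extract$sum_assured >= 0)
)

dbDisconnect(con)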

5.3 R

Testing: An asset valuation model.

The tests vary individual parameters in R, run the model several times and measure the impact on the valuation to check that the changes are as expected. The 'testthat' package is used to automate the running of the tests, and a PDF report is produced summarising the percentage of successes, warnings and failures.
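The valuation model itself is not reproduced here, so in the sketch below value_portfolio() is a placeholder and the sensitivities checked are purely illustrative; the structure of the testthat tests is the point.

library(testthat)

# Placeholder valuation function: a single cash flow of 100 due in 10 years
value_portfolio <- function(discount_rate) {
  100 / (1 + discount_rate)^10
}

test_that("valuation falls when the discount rate is increased", {
  base    <- value_portfolio(discount_rate = 0.03)
  shocked <- value_portfolio(discount_rate = 0.04)
  expect_lt(shocked, base)
})

test_that("a 1bp rate change moves the valuation by well under 1%", {
  base   <- value_portfolio(discount_rate = 0.03)
  bumped <- value_portfolio(discount_rate = 0.0301)
  expect_lt(abs(bumped - base) / base, 0.01)
})

In practice the placeholder would be replaced by calls into the real model, and a runner such as testthat::test_dir() would execute the whole suite and collect the results that feed the summary report.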

Summary

Automated testing can have a significant development cost but, when implemented in the right setting, will save time overall and reduce the risk of projects being delayed. We would therefore recommend that managers seriously consider an automated approach in any area where model development is planned.

Some factors to consider are:

Jack Davies

October 2020