13.02.2022

A brief introduction to artificial intelligence and neural networks

The phrases artificial intelligence (AI) and neural networks (NN) are used widely by the media and in everyday conversation. However, if asked to explain the difference between these concepts, many of us may stumble, and this lack of clarity can make commentary about AI confusing or misleading. In this article we aim to provide a brief introduction to some of the terminology involved and to consider some of the possible applications – and challenges – of neural networks.

AI in daily life

Imagine a life where you wake up in the morning and your voice assistant opens the curtains, turns on the lights and reads the news at your command. On the way to work, you take a self-driving bus or taxi and browse videos on YouTube and TikTok suggested by your viewing history. You enter your office building and the security gate unlocks instantly by scanning your face rather than a fingerprint. When you log into your email account, the filtered spam is already in the bin and calendar events have been created automatically from the contents of your emails. You attend a virtual conference through a virtual reality (VR) headset right in your office. Some attendees speak foreign languages, and a machine translates what they are saying instantly. After work, you apply for a new credit card online and an AI-powered risk management system approves it instantly. Then you try on and purchase an expensive suit online without visiting a shop, thanks to augmented reality (AR) fitting.

All these scenarios are supported by powerful AI technologies – some are already happening and those that aren’t are not far off. Andrew Ng, one of the leading proponents of AI, said in an interview for Stanford Business that “just as electricity transformed almost everything 100 years ago, today I actually have a tough time thinking of an industry that I don’t think AI will transform in the next several years.”

AI in insurance

Compared to some sectors, the insurance industry has been relatively slow to adopt AI. However, there are a few companies putting their ideas into practice. Here are some examples related to pricing, underwriting, and claims processing.

Lemonade: an insurance company based in New York, Lemonade uses AI to reduce the paperwork and phone calls involved in underwriting and to process claims in seconds. It emphasises that AI is only ever used to approve claims, never to deny them, in order to avoid discrimination.

Cytora: an insurtech company in London, Cytora states that it uses AI to help insurers set more accurate premiums based on better external data.

Shift: a company based in France, Shift provides AI-native solutions which it claims help insurers deliver excellent customer experiences while supporting underwriting fraud detection, claims automation, claims fraud detection and financial crime detection.

Tractable: another insurtech company in London, Tractable uses photos to assess damage at the first notice of loss after a car accident, applying AI to calculate repair costs swiftly and accurately. Its website says the “result is faster response times, better customer experience, and a quicker resolution of claims thanks to instant and precise information”.

Similar applications can also be found in China, where some insurance companies provide pet and livestock insurance using face recognition and claim automation. We can expect more start-ups and companies to join the game and drive the integration of AI in the insurance industry.

AI, machine learning (ML), deep learning (DL), neural networks (NN) – are they all the same?

There is no single answer to this question: it depends on who is using the terms and in what context. Formally, they are not the same concepts in computer science, but they are closely related. The mainstream view in computer science treats ML as a branch of AI, and DL as a subfield of ML based on deep neural networks (Figure 1, Table 1). The formal definitions of these terms in the computer science literature rely on some involved assumptions and explanations, so below we instead use the key points of each term to describe and differentiate them.

Figure 1

Table 1

DL historically accounted for only a small part of AI in academia, but its explosive growth over the past 10 years has made it one of the most prominent machine learning techniques in use today. Many new AI applications are driven by neural networks.

The degree to which the distinctions between each of these four terms matter depends on the people involved.

It should be noted that neural networks can also be used to tackle problems outside of AI – for example nonlinear function approximation – but that will be the subject of a future article.

Types of neural networks

There are many different types of neural networks, but the three basic (most popular) types are referred to as Artificial, Convolutional and Recurrent (Table 2). This terminology is somewhat confusing, as Convolutional and Recurrent neural networks are clearly ‘artificial’ in the sense that they are not biological. In the early literature, ANNs referred to any non-biological neural networks. However, since the development of modern CNNs (LeNet, 1989) and RNNs (1989), most people now use the abbreviation ANN as the name for standard neural networks (Figure 2), especially when CNN and RNN are mentioned at the same time.

Table 2

ANNs are the simplest neural networks and are sometimes also called standard or fully-connected neural networks. As shown in Figure 2, each node (neuron) in the network is connected to the nodes in the next layer, indicated by the rightward arrows. The first (left-hand) column represents the input data. Each column in the middle is a layer that detects and extracts relationships in the data passed on from the previous layer. The last (right-hand) column is the required output, either a predicted value or a set of probabilities. Again, note that we plan to write a subsequent article focusing on the technical aspects of ANNs, so no further detail is given here.

Figure 2
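The left-to-right flow described above can be sketched in a few lines of NumPy. This is an illustrative toy rather than a trained model: the layer sizes, the random weights and the choice of ReLU and softmax activations are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)      # common hidden-layer activation

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

# Random weights stand in for trained parameters: 3 inputs -> 4 hidden -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden layer -> output layer

x = np.array([0.5, -1.2, 3.0])      # one input record (the left-hand column)
hidden = relu(W1 @ x + b1)          # middle layer extracts data relations
probs = softmax(W2 @ hidden + b2)   # right-hand column: output probabilities

print(probs, probs.sum())           # two probabilities summing to 1
```

Each `@` is a matrix-vector product, so every hidden node combines all of the inputs – the “fully-connected” structure the arrows in Figure 2 depict.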

A CNN works like a scanner moving over an image with several different filters (usually square). Each filter generates a new, smaller image after scanning, as shown in Figure 3. CNNs can therefore ‘read’ the image and extract features automatically for further analysis. These extracted features are then used to recognise the contents of the image, through an ANN. Because of this ability to extract image features, CNNs are powerful tools for image-related applications.

Figure 3
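The scanning behaviour can be shown with a toy convolution in NumPy. The 6x6 ‘image’, the 3x3 vertical-edge filter and the stride-1, valid-padding scan are all illustrative assumptions; the point is that each position of the filter over the image yields one output pixel, so the result is smaller than the input.

```python
import numpy as np

def convolve2d(image, filt):
    """Slide the filter over the image; each position gives one output pixel."""
    h, w = filt.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # element-wise multiply the patch by the filter, then sum
            out[i, j] = (image[i:i+h, j:j+w] * filt).sum()
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])            # simple vertical-edge detector

feature_map = convolve2d(image, edge_filter)
print(feature_map.shape)   # (4, 4): smaller than the 6x6 input, as described
```

A real CNN learns many such filters from data rather than using hand-written ones, but the scanning mechanics are the same.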

RNNs are the only neural networks of the three that involve a time dimension, which is why they work well for sequence data such as sentences, sounds, rhythms and time series. An RNN processes the input sequence (x1, x2, x3) element by element and returns an output sequence (y1, y2, y3), as shown in Figure 4.

Figure 4
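A minimal sketch of this loop in NumPy is below. The hidden-state size, the random weights and the tanh activation are illustrative assumptions; what matters is that the same weights are reused at every time step and the hidden state h carries information forward from one step to the next.

```python
import numpy as np

rng = np.random.default_rng(1)
Wx = rng.normal(scale=0.5, size=(4, 1))   # input -> hidden state
Wh = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden (the recurrence)
Wy = rng.normal(scale=0.5, size=(1, 4))   # hidden state -> output

xs = [np.array([[0.1]]), np.array([[0.7]]), np.array([[-0.3]])]  # (x1, x2, x3)
h = np.zeros((4, 1))                      # initial hidden state
ys = []
for x in xs:                              # process the sequence one element at a time
    h = np.tanh(Wx @ x + Wh @ h)          # new state depends on input AND old state
    ys.append((Wy @ h).item())            # one output per time step

print(ys)                                 # (y1, y2, y3)
```

Because y3 depends on h, which has seen x1 and x2, the network can use earlier elements of the sequence when producing later outputs.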

We emphasise that CNNs and RNNs are not subsets of ANNs, as they are fundamentally different structures. It is common for a CNN or RNN to be followed by a shallow ANN, which is used to make the final predictions. Neural networks composed of more than one type of neural network are called hybrid neural networks. Complex real-world applications, such as autonomous driving and speech recognition, rely on hybrid neural networks.
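A hybrid pipeline of this kind can be sketched as follows, with a random 4x4 array standing in for the feature map a CNN would extract; flattening it and feeding it through one shallow fully-connected layer (an illustrative assumption, with made-up sizes and random weights) produces the final class probabilities.

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.random((4, 4))      # stand-in for a feature map extracted by a CNN
flat = features.ravel()            # flatten the 4x4 map into 16 values

W = rng.normal(size=(3, 16))       # one shallow fully-connected layer, 3 classes
b = np.zeros(3)
logits = W @ flat + b

probs = np.exp(logits - logits.max())   # softmax turns scores into probabilities
probs /= probs.sum()
print(probs.argmax())              # index of the predicted class
```

The flattening step is the join between the two component networks: the CNN stage works on 2D images, while the fully-connected stage expects a flat vector.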

In practice, there is no universally best architecture for neural networks and it all depends on the specific applications and the data types involved. This is a famous concept in the AI industry known as “No Free Lunch”.

Are DL and neural networks universally applicable?

Some people (the author included) would like to see DL as a universally powerful tool for any application, provided sufficient data is available. In practice, however, the answer to this question is no, for the main reasons set out below.

First, for most applications we need adequate data to train a neural network. As illustrated in Figure 5, the larger the dataset, the better the performance. Whilst researchers and computer scientists in academia continue to develop sophisticated algorithms and neural network architectures to achieve better performance on limited or existing datasets, in practice significantly improved performance is usually only possible by collecting and using more data and by training larger neural networks.

Figure 5

Second, neural networks are black boxes to humans. To explain this statement, we start with an example. Humans recognise animals by features such as shape and colour; CNNs recognise images by features as well. But if we print out the features extracted by a CNN, they appear mysterious and are not comprehensible to us. For example, Figure 6 shows a CNN-filtered image that the machine recognises as a magpie. Because we do not understand the internal processes of neural networks, we cannot explain the results or decisions they produce. In the worst case, a decision made solely by a machine without explanation may be unlawful under the GDPR’s “right to explanation”. Moreover, in the insurance industry, regulators require insurers to have a proper understanding of their models and products, something that sits uneasily with the black box nature of neural networks. This is probably one of the key factors behind the relatively limited uptake of deep learning techniques in the insurance industry to date. Fortunately, a new but active field known as “Explainable AI” may be able to overcome this drawback by providing new explainable algorithms and better explanations of existing ones.

Figure 6

Third, the cost of implementing DL is non-trivial, especially for business-level applications, so anyone planning to employ neural networks commercially should consider doing so carefully. Designing and training a neural network from scratch is both time-consuming and challenging, with no guarantee of a successful outcome. A shortage of both professionals familiar with AI and of computing resources may be another reason why insurance companies themselves are more conservative in their attitudes towards AI than insurtechs.

Conclusion

In summary, DL and neural networks are handy and powerful tools for data-related tasks. Although the black box nature and the data and expertise requirements of neural networks are current obstacles, their use is likely to grow significantly in the future, as we live in an era of data explosion, with an unprecedented amount of data generated and captured every day globally. From this perspective, we can expect more applications in various industries, some of which may be game-changing. In the insurance industry, more and more insurtech companies are appearing in the market, providing AI-based solutions to pricing, underwriting and claims processing. As experts who deal with data on a daily basis, all actuaries should be at least somewhat prepared for this rapidly evolving technology.

Chaofan Sun

February 2022

References

Andrew Ng: Why AI is the new electricity

Lemonade’s Claim Automation

Is commercial insurance pricing ready for AI?

AI Estimating & Triage

https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks

https://serokell.io/blog/ai-ml-dl-difference

No Free Lunch Theorem for Machine Learning

https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html

https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788397872/1/ch01lvl1sec27/pros-and-cons-of-neural-networks

Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O’Reilly Media, 2019.

LeCun, Yann, et al. “Backpropagation applied to handwritten zip code recognition.” Neural Computation 1.4 (1989): 541-551.

Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. “Learning representations by back-propagating errors.” Nature 323.6088 (1986): 533-536.