
Future of risk management with Artificial Intelligence (AI) implementation

AI is vexing boards around the world, which are asking whether it presents an opportunity or a threat. The concept of AI began in the 1940s, pioneered by Alan Turing, amongst others, who famously proposed the “Turing test” to assess whether a computer was “thinking”. The argument (roughly) is that a computer is considered to be thinking if a human cannot tell whether it is talking to the computer or to another human. It was fitting, then, that Professor Sir Adrian Smith, the Institute Director and Chief Executive of The Alan Turing Institute, opened a discussion, convened by the Institute of Risk Management (IRM) and hosted by Egon Zehnder, to explore how risk leaders of the future should work with AI. This article describes the topics discussed.

The UK, along with many other countries, has concluded that AI is a critical new capability for its future and has therefore created The Alan Turing Institute to focus on the discipline. The Institute brings together some 400 researchers across thirteen UK universities, spanning mathematics, statistics, computer science, software development, and social science. This diversity of skills underscores the first key point: AI is multidisciplinary, and it is difficult for any one person to understand all the issues and intricacies. Risk professionals are therefore going to need to rely on a range of expert advisors when working with AI, but will also need to develop some skills themselves to be able to interpret what they are being told.

What is Artificial Intelligence (AI)?

Sir Adrian gave an analogy that some approaches in AI are similar to fitting a line through data. For line fitting, there is a known formula that explicitly finds the two parameters, the slope of the line and its position. Rather than solving the problem algebraically, another approach, he noted, would be to try lots of lines and stop when the ‘fit’ looks good enough. This is basically how deep neural networks work, except that the number of parameters can run into millions. ‘Fit’ is defined by an error function, effectively measuring how close the model gets to the past data; and ‘try lots’ is shorthand for Stochastic Gradient Descent, which gradually adjusts the parameters to improve the fit on a few data points at a time. This approach has become practical recently thanks to increases in computing power, advances in algorithms, large amounts of data, and cloud storage.
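To make Sir Adrian's analogy concrete, here is a minimal sketch, in Python, of fitting a line by stochastic gradient descent: the error function is the squared distance to the past data, and the two parameters are nudged a little, one data point at a time. The data, learning rate, and variable names are illustrative assumptions rather than anything from the report; deep networks follow the same recipe with millions of parameters.

```python
# Minimal sketch of "trying lots of lines": fit y = w*x + b by stochastic
# gradient descent on a squared-error loss. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.5 * x + 1.0 + rng.normal(0, 1.0, size=200)   # noisy "past data"

w, b = 0.0, 0.0        # the two parameters: slope and position
lr = 0.01              # learning rate: how far each adjustment moves

for epoch in range(50):
    for i in rng.permutation(len(x)):   # visit points in random order, one at a time
        err = (w * x[i] + b) - y[i]     # the error function for this point
        w -= lr * err * x[i]            # gradient of 0.5 * err**2 with respect to w
        b -= lr * err                   # gradient with respect to b

print(f"fitted slope {w:.2f}, intercept {b:.2f}")   # roughly 2.5 and 1.0
```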

AI, Machine Learning and Deep Learning

AI encompasses several other techniques, however, many of which have been tried with varying success over the past 70 years. A useful categorization was published in an MMC Ventures paper, “The State of AI: Divergence”, which treats AI as the generic term, with “machine learning” as a subset of AI and “deep learning” in turn as a subset of machine learning. AI methods include expert systems, where human rules are explicitly coded into the computer. These work in some circumstances, but it has, so far, proved challenging to use them for highly complex tasks such as “having a natural conversation with someone” or “making a cup of tea in a stranger’s house”.

Machine learning avoids the need to codify rules by hand by extracting them from the data in some way. This can either be “supervised”, where the data is already labeled, for example “this image is a cat”; or “unsupervised”, where the algorithms try to find patterns in the data directly with no human intervention. Machine learning methods include Support Vector Machines (SVMs), which find boundaries that best separate classes of data; decision trees, which repeatedly subdivide the data into categories that explain it well; and Random Forests, which build many decision trees on random subsets of the data and combine their answers.

Deep neural networks are a special type of machine learning originally inspired by a loose analogy to how brains function. They take data as input and flow it through lots of “neurons”, which “fire” if the data is exciting to them. ‘Excitement’ is defined by combining the parameters and the data in a fixed calculation which, if a threshold is exceeded, passes the answer forward to the next layer of calculation. In modern deep networks there are many such layers, which is why they are called “deep”. It is important to realize that the only calculations involved are multiplying, adding, and taking maxima. As such, once a neural network has been “trained” (that is, good parameters have been found) we know precisely how it will work in principle. The training process uses some hard maths, but essentially it is the step Sir Adrian described as “trying lots of lines”. It can take some time to work through the data, and this is why such large grids of computers are needed.
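To illustrate that last point, the short sketch below (written for this article, not code from the report) computes one layer of a small network: the inputs are multiplied by parameters, the results are added up, and a maximum with zero implements the “fire only if the threshold is exceeded” step. Stacking many such layers is what makes a network “deep”. All the numbers here are made up.

```python
# One layer of a deep network: multiply, add, take a maximum. Nothing else.
import numpy as np

def layer(inputs, weights, biases):
    pre_activation = inputs @ weights + biases    # multiply and add
    return np.maximum(pre_activation, 0.0)        # "fire" only above the threshold

x  = np.array([0.2, -1.3, 0.7])                          # made-up input data
w1 = np.array([[0.5, -0.1], [0.3, 0.8], [-0.6, 0.2]])    # parameters found by training
b1 = np.array([0.1, -0.2])

hidden = layer(x, w1, b1)    # values passed forward to the next layer of calculation
print(hidden)
```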

What can AI do?

The basic point of AI is that it can work through vast data sets with a specific task in mind. Humans could in principle do this too, but in practice it would take too long, cost too much, or be too boring. This is not a new phenomenon: spreadsheets revolutionized modern businesses in the 1990s, changing the workplace significantly, allowing more complex tasks to be done more quickly and raising expectations of what is possible. When it comes to AI, it is important for businesses to be aware of the art of the possible. For example, a neural network could review every recorded call ever made to a company to look for trends in complaints or questions. Chatbots are already used widely to handle easy requests and forward harder issues to their human counterparts. Algorithms can find patterns in data to enable a better understanding of customers (see case study). Documents can be read automatically and compared, flagging key changes to human reviewers. Images can be reviewed, looking for actionable signs such as cancer in a scan, or determining whether a car has been damaged in an accident by comparing before-and-after photos. Complex searches can be carried out to help a human case handler find the key information, regulations, or laws to answer a business question. AI does not have to provide the final answer; some of the best uses involve working hand in hand (metaphorically) with a human user.

There are many megatrends affecting societies today. Technology is one, but others include climate change, aging populations, disenfranchised youth, population growth, stagnating economic growth, aging infrastructure, and low financial inclusion. AI offers solutions that can help with some of these and, as one attendee mentioned, “society needs this to succeed”. For example, in developing countries AI can help improve financial inclusion by providing bespoke advice and financial service pricing at a fraction of the cost of human broking. In poorer communities, this may be the only way such products can be made available.

What impact will AI have?

Historically, industrial revolutions have been disruptive, but more new jobs have been created than destroyed. The second industrial revolution, from 1850 onwards, saw electrification, global communications, widespread use of artificial fertilizer, and increases in mobility; as a result, the working class was redefined and the professional middle class was born. Opinions at the round table differed on whether the technological revolution, including AI, will be different. Some felt that change could be so fast that some segments of the workforce may not be able to adapt quickly enough to avoid being left behind. In that case, the welfare state will need to evolve, and companies will have to think carefully about their moral duties beyond a traditional profit motive. This led to a discussion of culture. “Culture eats strategy for breakfast,” said Peter Drucker, and it was generally agreed that this is correct. Even highly skilled professions, such as radiologists, may be at risk of redundancy. The workforce realizes this, and low morale could lead to blocking behaviors and poor uptake of new technologies. These concerns are valid and must be taken into account in any change program. Senior leaders cannot expect the workforce to be altruistic and must find a narrative that is inspiring and inclusive. The Chief Risk Officer (CRO) should consider poor staff morale a key potential risk within the firm.

Human vs AI – a dangerously opaque black box?

As noted, the calculation steps in a trained network are fully transparent; in principle no step is hidden from view, although whether they are widely or easily understood is another question. But we have to compare this with real human decision-making. We may be able to tell a story to explain how a decision was reached, but MRI imaging has clearly shown that decisions are often not made by the logical reasoning part of our brains. Instead, the emotional brain center (the amygdala, popularly dubbed the “lizard brain” because it evolved early) is often involved, especially when we are tired or encountering new situations. Human decision-making is also subject to a range of further biases. So we need to be careful when using the term ‘black box’, as it can arguably apply to us more than to most AI systems. There are, however, genuine issues with these algorithms, such as not being sure which inputs will have the largest effect on the output. As with complex ecosystems in nature, strange and different results can emerge from seemingly similar data. This may be more an issue of us wishing to be in control than of the AI decisions being wrong. For example, it was not so long ago that DNA evidence was not accepted in court as proof of identity; attitudes have changed, and that scientific breakthrough is now accepted as evidence. AI decisions may follow a similar path as society becomes familiar with them. Such issues can also arise because the training data was biased to start with, however, and this is a concern of regulators, especially in relation to anti-discrimination laws.
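One common way to probe which inputs matter most, sketched below purely as an illustration (it was not a technique discussed at the round table), is permutation importance: shuffle one input column at a time and measure how much the model's error worsens. The fitted model with a predict() method is an assumed stand-in.

```python
# Permutation importance sketch: the input whose shuffling hurts accuracy most
# is the one the model leans on most. "model" is any fitted predictor with a
# .predict() method; X is a 2-D array of inputs, y the known outcomes.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base_error = np.mean((model.predict(X) - y) ** 2)
    scores = []
    for col in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, col])     # break the link between this input and the output
            err = np.mean((model.predict(X_shuffled) - y) ** 2)
            increases.append(err - base_error)  # how much worse the model gets
        scores.append(float(np.mean(increases)))
    return scores                               # larger score = more influential input
```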

AI Risks in Context

If society needs AI to succeed, then what could cause it to fail? Some felt that the media poses a threat to uptake, based on the reception that Genetically Modified Organisms (GMOs) have had in Europe: labeled “Frankenstein food”, they suffered a public backlash. By analogy, “algorithm kills child” is the headline none of us wants to read. Yet “driver kills child” is sadly an altogether too frequent event, so frequent that the media often does not even cover the story. A key role businesses and CROs can play is to highlight the relative risk of various choices. Nothing, as our profession knows well, is risk-free. Trust is built up over many years but can be lost in an instant, so a strong sense of ethics and careful development, from academia and business alike, is required to avoid this outcome.

Developing AI Capabilities

Many new systems have been developed to give a lay audience access to AI methods. These low-code or no-code options are becoming standardized. Nevertheless, at present, and probably for some time to come, we will need new talent that is comfortable using the new techniques. By analogy, basic spreadsheets may be ubiquitous, but few employees are comfortable with look-up tables, Visual Basic, or the other deeper parts of that toolbox. The UK government and its research councils, well advised by the academic community, foresaw this and facilitated the creation of multiple master's and doctoral training centers, so there will soon be a steady flow of expertise coming into the workforce. Several companies around the table were encouraging their staff to retrain by taking master's courses. Professional bodies, such as those for actuaries, are also considering adapting their curricula. The IRM has introduced a new Digital Risk Management Certificate, giving risk professionals an opportunity to update their general technological knowledge.

Data is the lifeblood of AI; without it the algorithms are empty, like a plumbing system of pipes and valves with no water flowing through. Yet companies are nervous about sharing data, citing GDPR restrictions or intellectual property (IP) concerns. It is certainly true that data should be considered a strategic asset, but, unless it is explored, the value it contains will not be exploited. One way to avoid privacy and IP concerns when building or experimenting with AI systems is to create a synthetic data set. Such data has the same form as real data but is entirely artificial, so model development can be outsourced using synthetic data and then brought in-house when it is ready to be run on the real data, behind closed doors. The case study illustrates how a synthetic data set of insurance claims was derived.

A new artisan, the “data wrangler”, has emerged: someone who can wrestle with raw data and turn it into a form that computers can interrogate. To extract the most value from data in the future, it should be structured where possible and stored systematically along with other contextual information. More thought should be put into systems design, not just for the immediate need but also for the future potential of any data that is collected. By thinking and planning now, the CRO can help ensure that the risk of future data complexity is minimized. Public attitudes to data vary: if people see how they can benefit from sharing their data, they are more likely to consent to it willingly.
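As a purely illustrative sketch of the synthetic-data idea (the column names and distributions below are invented, not taken from the case study), one simple approach is to fit elementary distributions to each column of the real data and then sample artificial records of the same form:

```python
# Hypothetical example: build a synthetic copy of a made-up claims table by
# fitting simple per-column distributions and sampling artificial records.
# Serious synthetic-data work also preserves correlations between columns and
# guards against re-identification; this only shows the basic shape of the idea.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in for the confidential "real" data set
real = pd.DataFrame({
    "claim_amount": rng.lognormal(mean=8.0, sigma=1.2, size=1000),
    "policy_type":  rng.choice(["motor", "home", "travel"], size=1000),
})

# Summarize each column, then sample synthetic rows of the same form
log_amounts = np.log(real["claim_amount"])
type_freqs  = real["policy_type"].value_counts(normalize=True)

synthetic = pd.DataFrame({
    "claim_amount": rng.lognormal(log_amounts.mean(), log_amounts.std(), size=len(real)),
    "policy_type":  rng.choice(type_freqs.index.to_numpy(), p=type_freqs.to_numpy(), size=len(real)),
})

print(synthetic.head())   # same columns and types as the real data, but no real records
```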

AI, the Chief Risk Officer and the Risk Management Community

An intelligent summary of AI

So what does AI mean for the CRO of the future? Reassuringly, it does not look like the role will be replaced by a black box, but risk leaders will need multidisciplinary training and experience rather than deep expertise in a single silo. We can expect CROs to stay involved in setting risk criteria for AI processes, in which case they will need access to trusted advisors who can translate the concepts. It will also be important for the risk function and its leader to have some knowledge of these concepts, and of the risks arising where data science, statistics, and computing meet, so training will be required. Reverse mentoring may be useful in this space, as “digital natives” (those who have grown up with digital technology) may have much to teach the senior leadership.

Finally, companies will need to innovate to keep up with economic pressures, and this is certain to include AI in the future. The skill set needed to oversee innovation is different from that of a business-as-usual function. Innovation projects should be seen as experiments, each with a clear hypothesis that gets explored and tested. CROs should be reassured if they see a series of experiments with null results followed by a few successful trials that scale up. This will require a changing attitude and a redefinition of “failure”: in the innovation context, ‘failure’ means ‘not learning’; it does not mean ‘not trying’.

Source: Risk, Science and Decision Making: How should risk leaders of the future work with AI? An IRM Report of a CRO Round-table Discussion.
