AI: to implement or not to implement

A few days ago, I spoke at the GRC Summit in London about whether Artificial Intelligence is a case of the total cost of ownership (TCO) outweighing the benefits. Here is an edited version of my talk. I was speaking in my capacity as the COO of Ditto AI, an explainable AI company. The views, however, are mine, though I make factual references to Ditto AI’s technology and customer offerings. The audience was Governance, Risk and Compliance professionals.

At the risk of sounding like an airline captain’s sign-off, let me open by saying thank you for choosing to attend this session. We know you had the choice of attending an expert talk on risk or hearing a panel discuss how auditors could evolve into strategic advisors.

So, thank you for being here!

With a detail-oriented audience such as this, let’s start with definitions: what is AI?

In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, just like humans and animals can.

Practically speaking, most AI advancements today are in Machine Learning, or ML.

ML algorithms use statistics to find patterns in large amounts of data, or Big Data, and use those patterns to make predictions. So an ML algorithm may examine your search history, the shows you chose, and the shows you actually finished, to predict what you may watch next on Netflix. Amazon does the same.
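To make that concrete, here is a minimal, purely illustrative sketch of pattern-based prediction. It is nothing like Netflix’s or Amazon’s actual systems, and the shows and viewing histories are made up; it simply recommends whatever co-occurs most often with the shows a viewer has already finished:

```python
from collections import Counter

# Toy viewing histories: the set of shows each viewer finished.
# All data here is invented for illustration.
histories = [
    {"The Crown", "Broadchurch", "Bodyguard"},
    {"The Crown", "Bodyguard", "Line of Duty"},
    {"Broadchurch", "Line of Duty"},
]

def recommend(finished, histories, top_n=3):
    """Suggest the shows that co-occur most often with what this viewer finished."""
    scores = Counter()
    for history in histories:
        if finished & history:                 # viewer shares at least one show
            scores.update(history - finished)  # count the shows we haven't seen yet
    return [show for show, _ in scores.most_common(top_n)]

print(recommend({"The Crown"}, histories))
# e.g. ['Bodyguard', 'Broadchurch', 'Line of Duty'] (ties may reorder)
```

Real recommender systems are far more sophisticated, but the principle is the same, and so is the weakness: the prediction is only as good as the patterns in the data.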

This type of data-mining-led intelligence has limitations, as we have all experienced. In 2000, one of my closest friends was expecting a baby. I sent her a funny book about pregnancy. After that, Amazon showed me pregnancy books for a good eight years. Not terribly intelligent, artificial or otherwise!

To be fair, it has since got better, and I use the term “better” lightly. In the USA, effective data mining by the retailer Target meant that the store “knew” a teenager was pregnant before she had told her dad. Creepy, but potentially useful.

Machine Learning’s cousin, Deep Learning, is quite powerful. It has led to many breakthroughs: facial recognition, voice recognition, hyper-real photo and voice synthesis such as the AI news anchor China unveiled last week, and AlphaGo, the programme that beat the world’s best human players at the complex game of Go. Experts have, however, cast doubt on whether that AI news anchor has any intelligence at all!

All this is a small fraction of the promise of AI, what AI could be.

Some believe that with enough data, we will reach Artificial General Intelligence, or AGI. But right now we are far from it, and missteps in the meantime may be, and are, expensive.

Most recently, we learnt that Amazon scrapped a hiring algorithm it had been using to rate applicants for tech jobs on a scale of 1 to 5. Yes, rating applicants just as you and I rate products we buy from Amazon!

It was found that the algorithm was not rating women fairly. It was downgrading female applicants’ scores for attending women’s colleges, for leading women’s chess clubs or sports teams, and so on. No prizes for guessing how this bias against female applicants got hard-coded into the algorithm: the data set used to train it contained applications from candidates hired over the preceding ten years, the vast majority of whom were male.
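A toy sketch shows how easily this happens. The data below is entirely made up, and the “model” is nothing more than historical hire rates per CV feature, but it reproduces the same failure: a feature that merely correlates with past rejections gets penalised.

```python
# Illustrative only: historical hiring records where most past hires were men.
# A naive model that scores candidates by the historical hire rate of each
# CV feature simply reproduces the skew baked into its training data.
past_applicants = [
    # (cv_features, was_hired)
    ({"tech_degree"}, True),
    ({"tech_degree"}, True),
    ({"tech_degree", "womens_chess_club"}, False),
    ({"tech_degree"}, True),
    ({"tech_degree", "womens_college"}, False),
]

def feature_hire_rate(feature):
    """Fraction of past applicants with this feature who were hired."""
    rows = [hired for cv, hired in past_applicants if feature in cv]
    return sum(rows) / len(rows) if rows else 0.0

# "women's" correlates with rejection purely because of who was hired
# in the past -- so the "model" penalises it.
print(feature_hire_rate("tech_degree"))     # 0.6
print(feature_hire_rate("womens_college"))  # 0.0
```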

The training data set matters. A lot.

In the UK, we heard how a complaint was filed with the MHRA against Babylon after its algorithm misjudged the symptoms of a heart attack as a panic attack. A real life-and-death mistake if ever there were one!

The interpretation matters too. A lot.

To a user, AI is still a black box.

An AI bot will give you answers but it won’t explain why. It often cannot explain how.

Even if the algorithm is made public, as the French government recently demanded for AI deployed in the public sector, most users will be hard-pressed to make sense of it all.

To those of you in the audience who work in audit or compliance, that inscrutability is a serious challenge.

Naturally, when you consider the use of AI in your lines of work, it is a legitimate question to ask — is the cost really worth it?

Like most complex questions, the answer is — it depends.

“Depends on what?”, you ask.

Let us take a little segue here to talk about Explainable AI, or XAI.

XAI solves the black box problem of AI.

That is the kind of AI we develop at our company, Ditto AI. Our patented methodology delivers rapid automation and deployment of complete, correct and consistent knowledge bases. This lets us deliver answers to a user, but also show our workings. Those workings take the form of an audit trail and an explanation in plain human language, both provided to the user.

That output allows for human oversight of the decisions proffered by an AI bot.
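For a flavour of what “showing the workings” means, here is a generic, hypothetical sketch of a knowledge-based system. It is emphatically not Ditto AI’s patented methodology, and the waste-management rules in it are invented for illustration; the point is that every conclusion records the rule that produced it, yielding both an audit trail and a plain-language explanation:

```python
# A generic sketch of a rule-based system that shows its workings.
# NOT Ditto AI's patented methodology; the rules below are invented.

RULES = [
    # (name, condition on facts, (fact set by rule), plain-language explanation)
    ("R1", lambda f: f["waste_type"] == "clinical",
           ("requires_consignment_note", True),
           "Clinical waste movements require a consignment note."),
    ("R2", lambda f: f.get("requires_consignment_note") and not f["note_present"],
           ("compliant", False),
           "No consignment note was recorded, so the movement is non-compliant."),
]

def infer(facts):
    """Forward-chain over RULES, recording every step as an audit trail."""
    trail = []
    changed = True
    while changed:
        changed = False
        for name, condition, (key, value), explanation in RULES:
            if key not in facts and condition(facts):
                facts[key] = value
                trail.append(f"{name}: {explanation}")
                changed = True
    return facts, trail

facts, trail = infer({"waste_type": "clinical", "note_present": False})
print(facts["compliant"])   # False
for step in trail:          # the plain-language audit trail
    print(step)
```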

This brings us to whether there is any benefit in using AI in the risk, compliance and governance functions. And if yes, what those benefits may be.

AI is already in use in compliance in financial services and other sectors. The benefits are robust and in evidence.

For instance, we at Ditto AI are serving clients in sustainability and waste management compliance. NHS Scotland, Mitie and Suez are some of the blue-chip clients using our platform for compliance with waste management regulations.

Similarly, in the financial services industry, especially investment management, large numbers of transactions need to be screened for KYC, money laundering, market manipulation, insider trading, and third-party risk.

Implementing an AI-driven compliance system can enable a business to handle large-volume transactional compliance: checking for indicators of fraud or other malfeasance, or comparing different sets of data to check for red flags.
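At its simplest, that screening is systematic rule application at a scale no human team can match. Here is a stripped-down, hypothetical sketch; the thresholds and country codes are invented, and real rules would come from your compliance policy and your regulators:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    counterparty_verified: bool

# Hypothetical rules for illustration only.
HIGH_RISK_COUNTRIES = {"XX", "YY"}
REPORTING_THRESHOLD = 10_000

def red_flags(tx):
    """Return the list of red flags raised by a single transaction."""
    flags = []
    if tx.amount >= REPORTING_THRESHOLD:
        flags.append("amount at or above reporting threshold")
    if tx.country in HIGH_RISK_COUNTRIES:
        flags.append("high-risk jurisdiction")
    if not tx.counterparty_verified:
        flags.append("counterparty not KYC-verified")
    return flags

transactions = [
    Transaction(12_500, "XX", False),
    Transaction(250, "GB", True),
]

# Machines screen everything; humans review only what gets flagged.
for tx in transactions:
    print(tx.amount, red_flags(tx))
```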

When AI handles this aspect of compliance, humans can focus on more sophisticated analysis of red flags, and more strategic compliance concerns.

The benefits of AI in risk and governance are emerging and strong but they rely on more advanced AI than just parsing large volumes of data.

When legacy technology is used for risk monitoring, it looks for threats using known signatures and pre-built event-detection logic. The results may not always align with business risk, and may end up tech-first rather than risk-first.

AI is most useful to the risk function when handling and evaluating unstructured data: the kind of information that doesn’t fit into spreadsheets with their neat rows and columns.

AI that is built using cognitive technologies, such as natural language processing (NLP), can deliver the ability to analyse such unstructured text to deliver insights.
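As a toy illustration of the idea, and nothing like a production NLP pipeline, the sketch below scans free text for a hypothetical vocabulary of risk terms. A real system would add entity extraction, negation handling and context, but the principle of turning unstructured text into countable signals is the same:

```python
import re
from collections import Counter

# Hypothetical risk vocabulary, for illustration only.
RISK_TERMS = {"breach", "penalty", "fine", "sanction", "non-compliance"}

def risk_signals(document):
    """Count risk-related terms in free text -- a stand-in for the richer
    entity and sentiment analysis a real NLP pipeline would perform."""
    words = re.findall(r"[a-z\-]+", document.lower())
    return Counter(w for w in words if w in RISK_TERMS)

report = "The supplier disclosed a data breach and faces a regulatory fine."
print(risk_signals(report))  # Counter({'breach': 1, 'fine': 1})
```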

It is a legitimate question to ask at this point — is my organisation really ready for all this?

The simple answer is — probably more ready than it may seem.

For instance, many organisations, even outside the compliance-sensitive banking and insurance industries, have built data science teams in marketing and sales.

With leadership and strategic will, this data science capability can be extended to risk management, audit and compliance functions.

If a central data science and analytics function already exists in your organisation, then you are already ahead of many and can deploy some AI capability quicker in the service of governance, risk and compliance functions.

Which brings us to the question we are really here to ask — does the TCO of using AI in your businesses outweigh the benefits?

The total cost of ownership of AI includes the cost of implementing, as well as the opportunity costs and the risks of not implementing.

All told, the cost of implementing an AI solution is not much these days.

If you want to build your own AI, well, that would be a few hundred million dollars.

However, several AI services including analytics are now available on subscription. Sometimes for as little as $100 a month.

SaaS subscription models take away the traditionally high costs associated with building software, acquiring hardware, data migration, downtime, depreciation, upgrades, security, maintenance and support, and retiring it at the end of life.

To assess the TCO, one also needs to assess the opportunity cost of not implementing AI. That needs first a baseline case. It will vary by team, and by company.

A relatively simple but not easy way to establish that baseline would be, for instance, to undertake a time audit in a team.

For a month, everyone keeps track of their time and what they are spending it on. That data pooled together will help identify the main time-consuming activities. That is your base case.
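As a worked example, with entirely hypothetical figures for a small team, the arithmetic of that baseline might look like this:

```python
# Hypothetical figures from a one-month time audit of a small team.
hours_per_month = {
    "transaction screening":   160,  # routine, automatable
    "report formatting":        60,  # routine, automatable
    "investigating red flags":  90,  # judgment work, keep human
    "strategic projects":       30,
}
automatable = {"transaction screening", "report formatting"}

freed = sum(h for task, h in hours_per_month.items() if task in automatable)
total = sum(hours_per_month.values())
print(f"{freed} of {total} hours/month could be redirected "
      f"({freed / total:.0%}) -- that is your opportunity-cost baseline.")
```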

Now it is up to you to consider what the team could be doing if their routine and repetitive tasks were automated.

What would they rather put that time into?

Are there backlogged strategic projects that could benefit?

Could people just work shorter days than the current 10-12 hours?

The case will vary. As will the questions. But at least your team will have a good handle on what AI may deliver to them by way of time and strategic opportunity.

Whether we work in risk, compliance, governance or any other key business role, our job as senior executives, or as people advising senior executives, is essentially about making decisions.

AI implemented to solve problems relevant to our work should assist us in making better decisions.

Making defensible decisions.

Making them quicker, more transparently and possibly cheaper.

In other words, it would blend technology-enabled insights with sophisticated human judgment, reasoning, preferences, and choice.

But AI is a learning technology. There is already enough evidence to show that AI performs as well as sophisticated humans, and sometimes better.

The challenge that lies ahead of us is how our work and indeed our roles will change in the near and distant future with the implementation of AI to assist our work today.

The question deserves strategic focus from not just individual functions or companies but also from industries and governments.

The answer will be shaped by human beings who remember the trajectory of the industrial revolutions that have gone by.

Human beings who understand the challenges of social inclusion, the challenge of deploying human capital meaningfully, and the question of human dignity.

Human beings who understand trade-offs beyond quantitative or quantifiable parameters.

As Sebastian Thrun, who helped build Google’s driverless car and founded Udacity, said: “Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.”

Not implementing AI today will not defer the future.
