By: Dr. Gary Anderberg

August 14, 2023 — We've already commented on some of the major risks that can attend AI projects, or the lack of safeguards around the use of AI buddies like ChatGPT or its kissing cousin, Claude 2.0. But what about risk management as a user of AI? Shouldn't we be in line for new AI projects that help us reduce or analyze risk more closely? For example, what about deploying AI to turn text information into something we can analyze for exposure trends, or to document video meetings so we have a reasonably objective record?*
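For readers curious what "turning text into something we can analyze" might look like in practice, here is a minimal sketch, assuming the OpenAI Python client (v1.x) with an API key in the environment. The model name, prompt, and claim note are illustrative only, not a GB implementation or recommendation.

```python
# A minimal sketch of using an LLM to turn a free-text claim note into
# structured tags suitable for exposure-trend analysis.
# Assumes the OpenAI Python client (v1.x); the model choice, prompt,
# and claim note below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

claim_note = (
    "Employee slipped on a wet loading-dock ramp while unloading pallets; "
    "reported lower-back pain and missed three shifts."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Extract from the claim note: cause of loss, body part, "
                "and a lost-time indicator. Reply as JSON with keys "
                "'cause', 'body_part', 'lost_time'."
            ),
        },
        {"role": "user", "content": claim_note},
    ],
)

# Structured tags like these, aggregated across thousands of notes,
# can surface exposure trends that raw narrative text hides.
print(response.choices[0].message.content)
```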

This question came to mind while reading an excellent article in Insurance Thought Leadership (ITL), "The 10 Biggest Mistakes in AI Strategies." This essay provides very useful advice for determining when AI might actually be a useful — as opposed to fashionable — strategy for meeting management needs. The Big Ten on this list come originally from an article by Bernard Marr, cited in ITL.

  1. Lack of clear objectives
  2. Failure to adopt a change management strategy
  3. Overestimating AI capabilities
  4. Not testing and validating AI systems
  5. Ignoring ethics and privacy concerns
  6. Inadequate talent acquisition and development
  7. Neglecting data strategy
  8. Inadequate budget and resource allocation
  9. Treating AI as a one-time project
  10. Not considering scalability

ITL provides a good deal of commentary on these items, but a few of them call for a quick discussion here. Item 1, clear objectives, should be first on any project development list. If you don't know where you're going, you won't know when you get there. The corollary is the agreement of all involved on the functional definition of success. Item 3 reminds us that AI, even Large Language Model (LLM) AI, is not magic. There is a subset of system tasks that it handles well and a much larger set that it does not.

We have sounded the alarm on Item 5 before in these pages, so consider that button pushed again. Item 6 is a little more subtle. LLM AI is not the same as machine learning AI, which many of us have been using for some years now. LLM AI requires specific training and skills. Getting serious about this new flavor of AI may require new talent acquisition if it's going to work well and be completed on time.

Items 9 and 10 are closely related. LLM AI has many potential uses, and you may be surprised at its scope once you get started. It's almost certainly not a one-and-done project. Paul Carroll's point at the end of the article is worth pondering:

AI has moved on to figuring out how to estimate car damage from photos a driver sends, how to price risk for a life insurance policy without requiring a doctor's appointment and the taking of fluids, etc. Basically, AI is a treadmill. Once you get on — as everyone should — you can't get off. It never stops moving.

Because the GB Journal is all about risk management, perhaps we should conclude our foray into AI with this caution from Eliezer Yudkowsky**: "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."


*Your faithful scribe has begun using this function to summarize and document (non-GB) online editorial meetings. It is remarkable how well it can work running quietly in the background.

**Co-founder of the Machine Intelligence Research Institute

Author


Dr. Gary Anderberg

SVP — Claim Analytics
