Podcast | ESG and AI – with Alyse Mauro Mason, Christine Livingston and Mark Carson

Artificial intelligence, including but not limited to generative AI, has taken the world by storm over the past year, with organisations worldwide scrambling to identify where and how it can be used to strengthen their businesses and grow competitive advantage. Without question, as companies move quickly to employ these technologies, ESG issues come into play at a number of levels, from sustainability reporting to social and governance risks that must be managed.

In this episode of Board Perspectives, Protiviti Associate Director Alyse Mauro Mason talks with Protiviti Managing Directors Christine Livingston and Mark Carson about AI and how it can be used in sustainability, some examples of use cases, and where you can leverage AI to identify opportunities and risk.

Christine is responsible for artificial intelligence/machine learning and innovation solutions at Protiviti. With over a decade of experience in AI/ML deployment, she has delivered hundreds of successful AI solutions, including many first-in-class AI-enabled applications. She has helped several Fortune 500 companies develop practical strategies for enterprise adoption of new and emerging technology, including the creation of AI-enabled technology roadmaps. She focuses on identifying emerging technology opportunities, developing innovation strategies and incorporating AI/ML capabilities into enterprise solutions.

Mark leads Protiviti’s Data and Analytics Strategy practice and is the Americas lead for ESG data and tooling, drawing on more than 20 years of experience delivering business value through the design, build, deployment, change management and operations of innovative data and analytics solutions.

Learn more about Christine at www.protiviti.com/gl-en/christine-livingston.

Learn more about Mark at www.protiviti.com/gl-en/mark-carson.

For more information on this and other ESG topics, visit Protiviti.com/ESG. We also invite you to read our paper, Sustainability FAQ Guide: An Introduction: www.protiviti.com/sg-en/research-guide/esg-sustainability-reporting.

Board Perspectives on Apple Podcasts

Board Perspectives, from global consulting firm Protiviti, explores numerous challenges and areas of interest for boards of directors around the world. From environmental, social and governance (ESG) matters to fulfilling the board’s vital risk oversight mandate, Board Perspectives provides practical insights and guidance for new and experienced board members alike. Episodes feature informative discussions with leaders and experts from Protiviti and other highly regarded organisations.


Alyse Mauro Mason: Welcome to the Board Perspectives podcast, brought to you by Protiviti, a global consulting firm. In this series, we explore numerous challenges and areas of interest for boards of directors around the world. I am Alyse Mauro Mason, and I help lead the ESG and Sustainability practice at Protiviti. I’m joined today by two Protiviti colleagues, Christine Livingston and Mark Carson.

 

Christine Livingston is a managing director in our Technology Consulting practice. Christine is responsible for artificial intelligence, machine learning and innovation solutions at Protiviti. With over a decade of experience in artificial intelligence and machine learning deployment, Christine has delivered hundreds of successful AI solutions, including many first-in-class AI-enabled applications.

 

Mark Carson leads Protiviti’s Data and Analytics Strategy practice and is the Americas lead for ESG data and tooling, drawing on more than 20 years of experience delivering business value through the design, build, deployment, change management and operations of innovative data and analytics solutions.

 

Today, we’re going to discuss artificial intelligence, otherwise known as AI, and how it can be used in sustainability, what it is, some examples of use cases, and where you can leverage AI to identify opportunities and risk. Christine and Mark are joining us today in their personal capacity to share their experience, expertise and insights with us. Christine and Mark, welcome to Protiviti’s Board Perspectives podcast series. Thank you so much for being with us today.

 

Christine Livingston: Thanks for having us, Alyse.

 

Mark Carson: It’s a pleasure to be here. Good to see you again, Christine.

 

Alyse Mauro Mason: I’d love to jump right in with a first question that I’m sure is on all our audience members’ minds. The topic of AI has never been more popular. It’s the topic of almost every conversation we’re in, particularly with all the press related to generative AI like ChatGPT. But AI’s capabilities are so much more than just generative AI. Can you spend some time grounding us in AI’s capabilities and helping define it for the audience today?

 

Christine Livingston: Sure. It’s one of my favorite questions, Alyse. I always like to remind people that AI has been around for a long time. You may remember a big moment of awareness in 2011, when IBM Watson played on Jeopardy! and beat Ken Jennings, the long-running champion. Think about how long ago that was: 13 years at this point. It was a public example of artificial intelligence and its ability, at that time, to interpret and understand natural-language questions and search for a relevant answer in a large body of information.

 

A lot of those capabilities still exist today. Artificial intelligence, broadly, is an entire field of capabilities that can simulate human thought processes and human decision-making. You may see that come across through things like natural language processing, the ability to understand unstructured information such as the way you and I communicate. Right now, we’re exchanging sentences and paragraphs back and forth; we might send emails of information. Computers, by contrast, tend to think in zeros and ones.

 

Natural language processing is a core capability of AI focused on translating the way you and I speak in long form into a data set algorithms can understand and interpret. AI is also behind things like predictive analytics — demand forecasting, supply chain planning, risk identification, fraud identification. Those capabilities have been around for a very long time.
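
To make that translation idea concrete, here is a minimal sketch of a bag-of-words representation, one of the simplest ways free-form sentences become numeric vectors an algorithm can work with. The sentences and vocabulary are invented for illustration; production NLP relies on much richer representations, such as embeddings.

```python
# A minimal sketch of "translating language into data": a bag-of-words
# representation turns free-form sentences into numeric vectors that an
# algorithm can compare.
from collections import Counter

sentences = [
    "Our emissions fell five percent this year",
    "Emissions rose sharply this quarter",
]

# Shared vocabulary across all sentences, in a fixed order.
vocab = sorted({word.lower() for s in sentences for word in s.split()})

def to_vector(sentence: str) -> list[int]:
    """Count how often each vocabulary word appears in the sentence."""
    counts = Counter(word.lower() for word in sentence.split())
    return [counts[word] for word in vocab]

for s in sentences:
    print(s, "->", to_vector(s))
```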

 

Of course, the latest and greatest advancement in artificial intelligence has been the advent of generative AI, which is an ability to create something new and novel. It’s not a predetermined output. It’s creating something new based on what it has learned about that topic area to date. As you’ve seen, this can be text, it can be video, it can be images, it can even be audio. Generative AI is changing the way we’ve thought about the field of artificial intelligence. AI historically has not been very good at creating, and now we’re seeing AI enter the domain and the era of creation.

 

Alyse Mauro Mason: Thank you so much, Christine. You said “learning” a couple of times in helping define what AI is: a machine learning to simulate a human thought process. Where does the human fall into this learning mechanism when it comes to AI and generative AI?

 

Christine Livingston: There are a couple of ways. Up front, humans can help and teach artificial intelligence. That can mean providing information for an algorithm or AI to interpret. In a very simple example, maybe it’s providing an image and saying, “This is an image of a cat, and this is an image of a dog,” and distinguishing for the model and the algorithm which is which.
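
As a toy illustration of that labeling step, the sketch below trains the simplest possible classifier on human-labeled examples. The two numeric features standing in for images, and every value in them, are entirely hypothetical; real image models learn from pixels with far more complex architectures.

```python
# A toy supervised-learning setup: humans provide labeled examples, and a
# nearest-centroid rule learns to separate "cat" from "dog".
labeled = [
    ((0.9, 8.0), "cat"),    # (ear_pointiness, weight_lbs), hypothetical features
    ((0.8, 10.0), "cat"),
    ((0.3, 55.0), "dog"),
    ((0.4, 70.0), "dog"),
]

def centroid(points):
    """Average position of a list of feature tuples."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {
    label: centroid([feats for feats, lbl in labeled if lbl == label])
    for label in {"cat", "dog"}
}

def classify(features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(features, centroids[lbl])),
    )

print(classify((0.85, 9.0)))   # expected: cat
print(classify((0.35, 60.0)))  # expected: dog
```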

 

It could also be in evaluating the output. In the case of a generative AI application, it could be “Yes, you got that right” or “No, you got that wrong” type of information: “That’s a picture of a dinosaur, not a picture of a cat.” Those are a couple of ways humans help teach AI how to learn. The other role of humans is to work more efficiently and more effectively, augmented with AI: How can we use the AI that’s been developed to help us do things more efficiently and more effectively, and use AI to accelerate our human processes?

 

Alyse Mauro Mason: Thank you so much, Christine. We’ve defined what AI is. We’ve talked about where the human element fits in. Let’s talk about some fears around AI — false evidence appearing real, or misconceptions when it comes to AI. What’s a common one you hear often?

 

Christine Livingston: Probably one of the most common fears that exists with AI is human replacement: We will be replaced by machines, and the workforce will dramatically change. And the workforce will change. But I love this perspective from Jed Kolko, who said, “It’s always easier to imagine the jobs that exist today and might be destroyed than it is to imagine the jobs that don’t exist today and might be created.” There’s so much truth and wisdom in that — that it’s much easier for us to see today what is going away and what is changing than being able to forecast what’s coming.

 

As we’ve developed and deployed AI applications and now GenAI applications, we’ve seen that a lot of that fear of replacement is often not grounded in reality. You create new jobs and new opportunities for people to focus on the more complex, higher-level tasks. We just talked about how humans help AI learn. Just like you might have a human employee and you need to monitor their performance and you need to provide feedback, and occasionally you need to retrain, you also need to evaluate the performance of your AI models and you need to give them feedback, and you need to retrain them.

 

I’ve also seen jobs created, the AI monitors or the AI trainers, focused on ensuring that our AI continues to function as we intended it to and continues to perform against new information and new data. A great example of this: Think if you had asked AI what CarPlay was seven years ago. It didn’t exist. It would have had no framework, no frame of reference, to even answer that question. Or if you had asked AI about ChatGPT two years ago, it would have had no frame of reference. There’s always this continual training process to inform your AI about the real world around you and teach it how to interpret new information as well.
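
Here is a minimal sketch of that monitor-and-retrain loop. The model, data and threshold are all toy stand-ins for a real system; the point is only the shape of the loop: score the model on recent labeled data, and retrain when accuracy drops.

```python
# A minimal sketch of the monitor-and-retrain loop with invented data.
from collections import Counter

class MajorityModel:
    """Toy model: always predicts the most common label it was trained on."""
    def fit(self, batch):
        self.label = Counter(label for _, label in batch).most_common(1)[0][0]
        return self

    def predict(self, item):
        return self.label

def accuracy(model, batch):
    """Fraction of labeled (input, label) pairs the model gets right."""
    return sum(model.predict(x) == y for x, y in batch) / len(batch)

history = [("q1", "yes"), ("q2", "yes"), ("q3", "no")]
recent = [("q4", "no"), ("q5", "no")]  # the world has shifted toward "no"

model = MajorityModel().fit(history)
score = accuracy(model, recent)
print(f"accuracy on recent data: {score:.0%}")

RETRAIN_THRESHOLD = 0.90
if score < RETRAIN_THRESHOLD:
    # Retraining on history plus recent data teaches the model the new reality,
    # much like telling it about a product that did not exist at training time.
    model = MajorityModel().fit(history + recent)
    print(f"after retraining: {accuracy(model, recent):.0%}")
```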

 

Alyse Mauro Mason: That is extremely helpful. The truth and wisdom behind that — the performance continuously being evaluated like a human. The machine is only going to be as smart as we enable it to be. That’s a beautiful sentiment. Thanks for sharing that with us, Christine.

 

Mark, we’re going to dive into some ESG — environmental, social, governance — use cases and talk about some of the AI capabilities that can be enabled through the ESG and sustainability lens.

 

Mark Carson: We’ll probably dive a little deeper into some of these later, but I want to talk about some themes of use cases. There’s a lot of opportunity here within ESG to leverage all the versions of AI Christine talked about — everything from using natural language processing to look at unstructured sources of information, like a utility bill or somebody else’s sustainability report, and pull out of that source the salient information you’ll need for your own reporting. That’s a classic natural language processing need.
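
As a minimal sketch of that extraction pattern, the snippet below pulls a usage figure out of an invented utility-bill text with a regular expression. A production system would lean on NLP models or an LLM to handle the variety of real documents; the bill text and figures here are illustrative.

```python
# A minimal sketch of pulling one salient figure out of an unstructured source,
# here an invented utility bill rendered as plain text.
import re

bill_text = """
ACME Power & Light - Statement
Service period: 01 Mar - 31 Mar
Total electricity used: 12,480 kWh
Amount due: $1,872.00
"""

match = re.search(r"([\d,]+)\s*kWh", bill_text)
if match:
    kwh = int(match.group(1).replace(",", ""))
    print(f"Extracted usage: {kwh} kWh")
```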

 

Another version might be your ability to set effective targets as an organisation. There’s a lot of research that can go into what your peers are doing in their target setting and into the realities of your current footprint. From a greenhouse gas emissions perspective, say, your current footprint is unique, and can be unique versus your peers and others. How do you baseline? There’s a lot of information to bring together to come up with points of view on that, and artificial intelligence can help you collect that information and look for signals within it that might help you set targets.

 

Another one, more on the front-end, generative side of AI, is around storytelling: being able to draft content for you to submit into your reports. Now, obviously, there need to be checks and balances, and you should treat generative AI output as a draft for you to evaluate and work from. But it can be very helpful, because it can often make observations you’re not seeing; you don’t think in the millions of permutations the AI model can consider when it’s looking at data. It might come up with some interesting nuggets for you to add to your storytelling.
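
A hedged sketch of what that drafting step can look like, using the OpenAI Python client as one possible tool. The model choice, the metrics dictionary and the prompt are assumptions, and the result is only ever a draft for human review, as Mark stresses.

```python
# A hedged sketch of drafting report narrative with a generative model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

metrics = {"scope1_tCO2e": 1200, "scope2_tCO2e": 3400, "yoy_change_pct": -8}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You draft concise sustainability-report narrative."},
        {"role": "user",
         "content": f"Draft one paragraph summarising these emissions figures "
                    f"and the year-over-year trend: {metrics}"},
    ],
)

draft = response.choices[0].message.content
print(draft)  # reviewed and fact-checked by a human, never published as-is
```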

 

Then you can get into things like meeting your targets and driving down greenhouse gas emissions. For example, one of my clients is using visual AI to fly drones over their solar farms, look for cracks in solar panels, and proactively fix or repair a panel that’s about to break. There are ways like that to use AI to help you become more sustainable as well — lots of different angles here.

 

The biggest one — and we’ll dive into it more, no doubt — is around data management: the fact that right now, it’s so hard to collect a lot of this data, make estimates on it and standardise it so you can tell a consistent story. Artificial intelligence is very good at ingesting, standardising, aggregating and bringing information together, and even making suggestions based on observations from that data set.

 

Alyse Mauro Mason: Thank you, Mark. It sounds like there are a lot of good use cases here; specifically, you mentioned data management, storytelling and target setting. Christine, I’d love to hear from you about how enterprises are using AI within their organisations in these use cases. There are going to be risks and opportunities. When you think about AI from an opportunity and a risk perspective, and the symbiotic importance of both, can you help define that further, in terms of how boards and companies can be thinking about the risks and opportunities within specific use cases or at large?

 

Christine Livingston: It’s important to think about the opportunity and the risk of specific use cases intentionally. It’s no secret that AI is here, it’s here to stay, and it’s making a transformative impact on companies everywhere. There were studies done throughout 2023 indicating that people who used generative AI capabilities were 40% more efficient than their peers who didn’t. There are also statistics and studies saying that for every dollar a company invested in AI, it realised an average of $3.50 in return. Clearly, no one can complain about a 3.5x return or a 40% efficiency gain.

 

What that means is, now is the time to invest in this, to learn and to deploy meaningful applications so you’re not left behind. But you need to do so very responsibly. There are real and unique risks with generative AI in particular, such as hallucinations, probably the most talked-about and best-understood risk or concern of generative AI specifically: the generation of text or information that is erroneous, nonsensical or entirely detached from reality. We’ve seen some big examples of people who relied on a response from a generative AI model without properly fact-checking its information and realised that the very confident answers they received were very wrong. They were factually incorrect but, again, very compelling and very confident.

 

Beyond hallucination, privacy and security are other very real concerns and risks with artificial intelligence. It’s important to understand how you’re going to pursue an opportunity you can’t afford to be left behind on while maintaining adequate governance and risk management, so that you’re using this technology ethically, responsibly and in a way that’s going to produce meaningful results with humans in the loop.

 

We found that GenAI, more than some earlier AI technologies that preceded it, has required a cross-functional team to look at that opportunity and that risk together, to understand why you would, why you should and how you could do something. It’s important for boards to stress the importance of experimenting with this technology and recognising that it is here and it is real, but also to encourage responsibility and accountability in their key leadership to ensure it’s done responsibly and ethically.

 

Alyse Mauro Mason: Thank you so much, Christine. One of the key words I heard there was experimenting: essentially, test before you go live, before those impacts hit the business. Understand where those impacts may reside, and then try to mitigate them through holistic governance over your AI and data and analytics practices within your organisation at large. Is that a fair way to think about governance?

 

Christine Livingston: Absolutely. Very well stated.

 

Mark Carson: I’d take it one step further: It’s test and retest, and consistently test, because there is model drift. A silly example: You train a model that one plus one equals two. If you have enough people attack that model and say, “Actually, one plus one equals six,” eventually, that model can start thinking, “The world has changed. One plus one may not equal two anymore.” It’s a silly, simple example, but it’s things like that you have to consistently test for to make sure your models are staying accurate, ethical and private. Those things need to be tested for consistently.
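
A minimal sketch of that kind of consistent testing: keep a fixed set of golden question/answer pairs and re-run them on a schedule, so drift shows up as a falling pass rate. The cases and the drifted stand-in model are invented for illustration.

```python
# A minimal sketch of "test and retest": re-run a fixed golden set so drift
# (a model slowly "learning" that 1 + 1 = 6 from bad feedback) is caught early.
GOLDEN_CASES = [
    ("What is 1 + 1?", "2"),
    ("Unit for Scope 1 emissions?", "tCO2e"),
]

def regression_check(model, required_rate=1.0):
    """Return True if the model still passes enough of the golden cases."""
    passed = sum(model(q) == expected for q, expected in GOLDEN_CASES)
    rate = passed / len(GOLDEN_CASES)
    print(f"golden-set pass rate: {rate:.0%}")
    return rate >= required_rate

# A toy stand-in model that has drifted on arithmetic:
drifted_answers = {"What is 1 + 1?": "6", "Unit for Scope 1 emissions?": "tCO2e"}
print(regression_check(drifted_answers.get))  # 50% pass rate -> False
```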

 

Alyse Mauro Mason: Mark, thank you so much for that. You mentioned data management earlier, and this topic has presented itself as a real challenge for boards, customers and clients. How can AI specifically support better data management?

 

Mark Carson: Let me define data management quickly. When I think about data management, I think about understanding what information you need to provide to investors, shareholders and insurance companies around any topic — today, it’s ESG — and understanding the stories you need to tell. Data management then runs from sourcing all the way through to delivery of that information. It’s the lifecycle of bringing that information together, storing it securely, checking it for privacy, feeding it to the models, making sure the models remain accurate, complete, ethical and secure, and then delivering that information.

 

That entire lifecycle is how I would define data management, and AI can be used significantly across the whole thing — from data collection, like I talked about with flying over solar arrays and looking for cracks, which is essentially data collection, to bringing the data in and applying algorithms to it: Say this data is coming in in parts per million and that data came in in another unit of measure. Let’s automatically bring it all together and standardise it to one unit of measurement. AI can support doing that.
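
A minimal sketch of that standardisation step, with illustrative sites, values and conversion factors: readings arrive in mixed units, and a conversion table brings them onto one scale.

```python
# A minimal sketch of unit standardisation across mixed-unit readings.
TO_PPM = {"ppm": 1.0, "ppb": 0.001, "percent": 10_000.0}

readings = [
    {"site": "A", "value": 450.0, "unit": "ppm"},
    {"site": "B", "value": 120_000.0, "unit": "ppb"},
    {"site": "C", "value": 0.05, "unit": "percent"},
]

for r in readings:
    # Convert each reading to a single shared unit (parts per million).
    r["value_ppm"] = r["value"] * TO_PPM[r["unit"]]
    print(f"site {r['site']}: {r['value_ppm']:g} ppm")
```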

 

AI can look for anomalies and say, “You probably meant your company was 70,000 people, not 700 million people.” It can flag data that doesn’t smell right. At the other end, it can look across a large data set for signals about relationships you may not have seen. For example, ice cream sales go up every time the temperature goes up. It’s a silly example, but with relationships like that, which you might not have been spotting in your data, it can make suggestions to you.
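
A minimal sketch of that anomaly check, using the robust median absolute deviation to flag the implausible headcount; the figures and threshold are illustrative.

```python
# A minimal sketch of "you probably meant 70,000 people, not 700 million":
# flag values far outside the range of comparable reports.
import statistics

headcounts = [68_000, 70_500, 71_200, 69_800, 700_000_000]  # last one suspect

median = statistics.median(headcounts)
mad = statistics.median(abs(x - median) for x in headcounts)  # robust spread

for x in headcounts:
    score = abs(x - median) / mad if mad else 0.0
    if score > 10:  # illustrative cut-off
        print(f"Flag for review: {x:,} (robust score {score:,.0f})")
```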

 

You can use it along the entire spectrum, but right now, using it to make sense of and standardise data on the sourcing side seems to be where people are especially interested in leveraging AI, because this data is coming in on the backs of napkins — it’s coming in from everywhere right now, while the discipline is still relatively immature.

 

Let me give you another level of detail from a board perspective — around reducing risk. Let’s move from the environmental side to more of the social side: wringing forced labor and child labor out of your supply chains. There are a lot of angles from which you can analyse that information, one of which may be customer service calls and internal HR calls taking place within these companies.

 

You’ve probably dialed in before and heard a robot tell you the call may be recorded and monitored for quality. In the past, you had humans listening to a statistically significant number of calls to see if they could glean any information that could make their customer service better, what have you. AI can listen to every single call, parse every single word and evaluate for sentiment: Does the person dialing in sound young? Don’t trust that signal on its own, but do they sound young? Does this person sound nervous? There are all kinds of things AI can do, and it can listen to every single call — legally and ethically, of course. There’s a lot of power there in AI to help you dig that much deeper into some of the challenges you’re trying to remediate.
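
A hedged sketch of scoring every call for sentiment instead of sampling a few, using Hugging Face’s transformers pipeline with its default sentiment model (weights download on first run). The transcripts are invented, and in practice any flag would route to a human reviewer.

```python
# A hedged sketch of sentiment scoring at full coverage rather than sampling.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

transcripts = [
    "I've been waiting three weeks and nobody has called me back.",
    "Thanks so much, that resolved everything quickly.",
]

for text, result in zip(transcripts, classifier(transcripts)):
    # result looks like {"label": "NEGATIVE", "score": 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f}) {text}")
```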

 

Alyse Mauro Mason: Thanks so much, Mark. Christine, thinking now about the ESG context: How do you see AI assisting in customising ESG reports and data disclosures for different stakeholders? Examples could be investors, customers, board members and even regulatory bodies.

 

Christine Livingston: This is a great use case for generative AI specifically. All those stakeholders you just mentioned are looking for information out of the same data. They just want to slice and dice it and visualise it a little differently. One investor might want to see everything in red; one might want to see everything in green. One might care a little more about a specific font or about highlighting certain visuals more than another. And generative AI is able to support the transformation of that same information into many formats at scale. Think hyper-personalisation of all these reports, templates and visualisations.

 

As Mark said, AI can listen to every single call. AI can now also generate every single format and visual. Give it the right prompt and the right information, and it’s able to quickly generate that output. Again, it needs to be reviewed and confirmed for accuracy. But the ability of AI, and GenAI specifically, to create truly personalised content is going to reach a scale we’ve never seen before: You can highlight for each individual, each stakeholder group or each corporation the data they care about the most. And AI is able to do that almost immediately, at scale.
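
A minimal sketch of that same-data-many-audiences mechanic, with plain string templates standing in for the generative model; all figures and wording are invented for illustration.

```python
# A minimal sketch of "same data, many audiences": one metrics dictionary,
# with a per-stakeholder template choosing emphasis and tone.
metrics = {"scope1": 1200, "scope2": 3400, "target_year": 2030}

TEMPLATES = {
    "investor": "Emissions exposure: Scope 1 {scope1} tCO2e, Scope 2 {scope2} "
                "tCO2e; reduction target set for {target_year}.",
    "regulator": "Reported figures: Scope 1 = {scope1} tCO2e, "
                 "Scope 2 = {scope2} tCO2e.",
    "board": "Headline: {scope1} + {scope2} tCO2e total footprint; on track "
             "for the {target_year} target.",
}

for audience, template in TEMPLATES.items():
    print(f"[{audience}] {template.format(**metrics)}")
```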

 

Alyse Mauro Mason: A lot of sustainability leaders at companies have very small teams, so having to do the same slice or different cuts of data for four stakeholder groups is extremely time-consuming. Being able to leverage technology to communicate what needs to be communicated to the different stakeholders is a huge value-add. For any sustainability leader, it’s going to be music to their ears. The ability to do that with the right governance, of course, and legally and ethically is going to be very important. That consistency of information and communication to the different stakeholder groups with a trusted set of data, that’s exciting to hear, and I hope this continues to see traction and transformation within the sustainability space specifically.

 

Let’s talk about the future. I know none of us have a crystal ball, but there are certainly ways AI can be used to predict future ESG challenges, opportunities and risks based on historical and current trends.

 

Mark Carson: I’m going to leave the future talk to Christine. Let me circle back on where this is going to become that much more useful from a pragmatic standpoint: A lot of these use cases and capabilities Christine and I have talked about are going to become that much more user-friendly.

 

For those of you listening who remember the beginnings of email: Some of you had to log in to servers with command-line prompts and all kinds of stuff, if you’re a Gen Xer like me. And then, all of a sudden, it changed. Nowadays, the computer reads your face, and your email is just there.

 

The same thing is going to happen with leveraging AI for ESG-related use cases. It’s going to be more and more embedded into products. To use Christine’s example of customising fonts and presentation methods, it won’t be long before you can produce your final-draft report, then speak to that draft and say, “Change this. Change that,” and it will change it for you, all as part of your ESG reporting tools, for example. It won’t be “Add on this. Bolt on that.” It’s going to become that much easier for the layperson to use.

 

Alyse Mauro Mason: Anything unfamiliar is uncomfortable, and the more we talk about it, the more we use it, the more we experiment, the more normal it becomes. Email and the internet were once brand-new things people had no idea how to use or what they meant. Now, you don’t think twice about sending emails, as everybody can probably attest given the number of emails that have popped up while you’ve been listening to this podcast alone. Innovation creates that connection point, that communication, that faster and hopefully more reliable access to information, and then that distribution of information.

 

Mark Carson: It’s going to be a lot easier to use. The layperson will have access to these tools; you won’t have to be someone who focuses on this for a living to use the products. All these things we’ve talked about are going to get that much easier to use.

 

Alyse Mauro Mason: It’s a great point, Mark, and the user experience is extremely important in this context. Thank you so much for making that point. Christine, putting your into-the-future thinking hat on, what are some challenges, opportunities and risks in this area people should be thinking about in the next year, three years, five years, using the knowledge you have and that view of AI’s historical and current trends?

 

Christine Livingston: You’ll see innovation accelerate like it never has before. The pace at which we’re moving and creating more and more capable models has accelerated exponentially even over the last couple of years. You’ll see that pace of change continue to increase, which is sometimes hard to believe because it feels like it’s changing on a second-by-second basis. We’ll continue to see that trajectory for quite a while in the GenAI space, as it is still new in the technology world.

 

You’re also going to see a lot of interesting things happen around data-collection practices, privacy practices. There’s a lot happening in courts right now — a lot of regulation around AI. We’ll see a lot of things develop over the next 12 months around how information can be collected and used.

 

On the opportunity side, you’re going to have an interesting ability, which Mark hinted at: to think multimodally and to engage with information in a way we never have before. If I’m looking at my water usage, do I even need a dashboard anymore? Or do I just walk over to my water meter and say, “Send me a report telling me how your usage compares over the last year”? Can I actually ask things like “Tell me what’s wrong with you. Why are you broken? How can I fix you?” Those types of questions and engagements become feasible and plausible in the near future.

 

The promise we thought Alexa or Siri technologies held is going to start to be realised. Could you say, “Alexa, generate my ESG report for 2023”? Possibly. You’ll see a lot of interesting innovations, and this technology applied in ways we never even thought it could be.

 

Alyse Mauro Mason: That sounds like magic — being able to ask Alexa that question and seeing, like Mark said, that first draft, that first output. In order for the innovation to continue, we need to be leaning in and leveraging these technologies, experimenting with them and getting curious about this from every business, every industry perspective.

 

Mark, to close out our conversation today, what are three key takeaways you want our audience to remember as they leave this conversation today?

 

Mark Carson: Number one is, the horse has left the barn. AI is here to stay. It’s only going to get more sophisticated, and it’s going to be up to all of us to ensure that it is doing more good than harm. It’s definitely going to have ramifications on both sides, as any major innovation does. It’s going to be up to all of us to look after it. Perhaps that’s two wrapped into one. Number one is, it’s here, and it’s not going away. Number two is, it’s up to all of us. It’s going to take a community to make sure that we keep this thing going in the right direction.

 

Number three is the adage “The best time to plant a tree was 20 years ago. The second-best time is right now.” We’re at that point with understanding AI, appreciating it and building the foundations within your organisation to enable it. You’ve heard it 100 times, but I’ll say it 101: A company isn’t going to go under, and an employee isn’t going to lose their job, because of AI; they’re going to be overtaken by a company, or a person, that has embraced AI. Those are my big three.

 

Alyse Mauro Mason: Thank you so much, Mark. Some of the key words I heard here today from both Christine and Mark: Embrace it. Take AI into your organisations, get curious, understand it, learn more. Continuous education, continuous performance evaluation. Test, then retest, and experiment. Invest in order to see those returns.

 

And then, Christine and Mark both said it’s no secret: AI is here, and it’s here to stay. Let’s do this as a community. Let’s do it as a practice. Share knowledge as often as you can.

 

I want to thank both Mark and Christine for being here today. And thank you to all our listeners for being loyal listeners. We hope you enjoyed our conversation today about AI and ESG, what it is, how it could help with your business needs, and how to identify and navigate opportunities and risk. If you have any questions, please reach out to us at Protiviti. Until next time, take good care.
