I recently spoke to Andrew Duncan, CEO of Infosys Consulting, about AI and leadership.
Adam: What should leaders today understand about AI?
Andrew: There’s a lot of fear out there in terms of what AI is, or what it could be. And I think one of the things that leaders need to understand is that AI is not necessarily going to replace them, but those who don’t engage with it are going to be a lot less successful, and their businesses are going to be a lot less successful, compared with the leaders who actually do engage with AI. It’s really hitting this point in time where it’s going viral. Literally. It is not just a technology. It’s not something you just brush off to the IT department. As a leader, you have to engage with it. You have to understand what it is. You have to understand what it’s capable of doing. The last couple of years, I think leaders have been doing a fair degree of learning. You’ve heard the terms proof of concept and pilot used probably too much, but a lot of that’s been going on, and I think we’re about to change from that environment into one of more prime time, where we’re looking at what AI is capable of delivering for our business, or what experience AI is capable of enabling for our business, our customers, et cetera. And I think it’s becoming that broader topic of what at Infosys we call enterprise AI, where you’re looking at really delivering meaningful business outcomes that are enabled by AI. So I think that’s another key learning. The first one: get involved, because you’re going to fall behind if you don’t. Another one is that it is an enabler, and the key is really trying to figure out how you can use AI to enable the delivery of experiences, the delivery of efficiency, et cetera, to your clients, to your vendors, to your employees, to your team. I would say it has become a very embracing technology in that it is for everybody. This is not an IT thing. You can look at things like Gemini, which is now available across a lot of the Google platforms; you’ve got Grok; there are these basically AI-driven personal assistants which enable you to become incredibly smart about pretty much anything very quickly indeed. You don’t have to be an AI architect or an AI whiz kid or whatever to do this. Get onto your phone, download Gemini, and start asking questions. Many leaders have not quite got there. I know that I’ve only recently, in recent months, started really using these personal assistants, but it’s amazing how much they’ve come along in the last year compared to where they were 12 to 18 months ago. If you don’t engage with these things and start using them, your colleagues, other businesses, and other leaders will start accelerating away from you, because what AI is providing is access to this huge data set in an extremely streamlined, efficient manner, allowing you to rapidly navigate through the mundane and start really inquiring and engaging in the topics that really matter for your business. So, at the risk of becoming repetitive, it’s not just a technology; everybody should be engaging with it and using it both to educate themselves and their teams, and to become an evangelist for what it can do for their organizations, and probably for the planet as well. We did a survey of a bunch of our clients earlier this year, and we found that across that core of clients, only about 16% have actually engaged in what we call proper change management and training for AI.
And that’s too low a percentage. It has got to accelerate fairly quickly, because otherwise you’re going to be left with a bunch of organizations who have that epiphany and wake up and go, oh, we need to engage, when many of their competitors may have already engaged and be miles ahead of them. So we have to look at companies engaging a lot more broadly around the AI topic, and making sure that their organizations engage a lot more broadly.
Adam: Andrew, you shared a lot of great advice and a lot that I would love to dive into, but it really all starts with a key piece of advice that you shared: getting started. Start engaging. How?
Andrew: My kids talk about FOMO all the time. This is real. FOMO is real here, and you have to engage with these personal assistants. I use Gemini a lot. There’s Perplexity. There’s a whole bunch of other ones. They all have a conversational front end, so you can ask them questions. You can put them into full conversational mode and go down any rabbit hole you want. And I think the first step is actually understanding how versatile this AI is in terms of what it knows, what it can play back to you, what format and structure it can play it back to you in, what it can filter out and what it can’t filter out, and what it can’t is becoming a much smaller set compared with what it can. You don’t need to be an engineer of any type. You don’t need to be technical. If you want to code something in C++ and you’ve never coded anything before in your life, with AI you probably can. It’s a proper enabling technology, but you don’t need to be a technologist to use it. And I think all business leaders need to take this ability to learn and get hands-on through a very intuitive interface, a conversational agent, and understand basically what it can do and what its boundaries are. And I think that is in the here and now, and you can get on with it tomorrow. It’s fully engaging, and it’s up to the individual and the leader to release their inhibitions and engage. It’s actually a lot cooler to have engaged with AI than to put it in this too-hard or too-technical box. That was years ago; this is the here and now. There’s another thing which, as leaders, we need to be careful of. As leaders, we have a responsibility to address the whole ethical use of AI, and that is, I think, a big topic which hasn’t been fully bottomed out yet in terms of responsible AI and ethical AI, but it’s one we take very, very seriously, looking at things like bias in data sets, fairness of representation in data sets, transparency within data, understanding how data is used to make AI-driven decisions, et cetera. Engaging in that topic as a leader, I think, is absolutely paramount. You can’t just go and play with it; you’ve got to understand what the implications of it are. I think there’s a realization, and this is modern leadership versus leaders of 10 years ago, that we are becoming a data-driven culture, and I think most people see that now. People use the internet continually, which is obviously another data-driven, digital platform, but embedding a data-driven culture within an organization has become extremely important, and it takes evangelism from the top and from the leadership to encourage that culture to develop, and to develop in a responsible and ethical way. Then there’s the topic du jour. One of the things which came up all week at Milken was upskilling workforces, and what is AI going to do to the workforce? That is a very sensitive topic, but it’s one which I think is very easy to be afraid of through ignorance, rather than addressing it by being educated and understanding that upskilling a workforce, so that your people can become a lot more efficient and a lot more capable and spend a lot less time dealing with the mundane and a lot more time really bringing their skills to the forefront of the business, is very real, and it can deliver huge advantages, not just to an organization, but to the workforce themselves. But the workforce has to take some responsibility as well to actually upskill themselves.
It’s available today. You can get onto one of these agents and learn about pretty much anything you want. I think, from an organization and a leader standpoint, you have to encourage people to go out there and educate themselves, provide them with platforms to educate themselves around AI, and provide them with the opportunity to upskill themselves, to take on different jobs, or differently shaped jobs, that may result from what I’d call this AI revolution. And the other thing is cross-disciplinary collaboration. It isn’t just about a technology. This is about business; this is about ethics, as I just mentioned; it’s operations, it’s customer service, customer experience, and everybody’s got to work together. It goes back to that enterprise AI concept I mentioned just now, the way we look at it at Infosys. It’s understanding what you are trying to deliver, and what you need to deliver it. And delivery today is not just going to the technology department and asking them to start coding. So identify and explain what that experience is you’re looking to deliver, and then understand what the fundamental component parts of delivering it are. And it’s a lot more than technology. It’s business processes, it’s ethics, it’s experiences. Know how all the facets you have within a business need to be brought to bear on delivering that particular experience or solving that particular problem. Very, very different. So going back to your question, I think the best way to start educating yourself and thinking about these things is to get engaged and start asking an agent about whatever you want to learn and know. And if you get bored, you can ask it to play it back with a French accent, if that’s where you want to go. I’m serious. You can change the accents, you can change the genders, you can change the language. You can change a whole bunch of things in terms of what’s played back to you in this learning experience. It’s quite interesting. If you’re on an aircraft, put headphones in, because not everybody wants to be educated on the same stuff you do.
Adam: What do you see as the biggest opportunities that leaders should be aware of as a result of this AI revolution, and how can they capitalize on them?
Andrew: It goes back to understanding what AI is. What does it mean? What does it do? I think a former Google executive explained it very well at a conference I attended several years ago, where he said it’s like you have a volume button, and if you’re operating with the standard mental capacity of a human, you’re at about one. He said AI allows you to turn that volume up to something like 100 and allows you to process far more, do far more in a far shorter period of time, interrogate far more. And I use it myself. It’s like turning up the volume button, but way beyond the 10 you’d think is the usual maximum, up to 100 or even 1,000. It’s that enabling of being able to do an enormous amount more, across a far greater data set, in a far shorter period of time that this is really about. Let me give you a couple of use cases. These are clients, and I won’t mention which ones, but look at commercial credit management for banks. Typically, you’d have a credit manager who is looking across a portfolio of businesses at the things which would affect their credit rating. Are they paying their bills on time? What do their balance sheets look like? Can they support their working capital requirement? All these different aspects of a business that they’re looking at. And they gather all this data and put it all into a report, and that report takes a lot of time to put together, and maybe once a quarter they’ll sit down with the business in question to review where their standing is with the bank and how they’re performing, so the bank can then assess the credit risk of extending whatever they’re going to extend to them in the future. It’s time-consuming; it takes a long period of time to do. Some of the banks have actually brought AI to that problem, and what AI is doing is interrogating all the data sets that they used to interrogate, plus a load more data sets that they didn’t interrogate because it took too much time. It is interrogating all those data points, consolidating them together, and producing the credit report for that company within minutes. Not months, minutes. And what’s happened is that the credit manager, instead of spending most of the time gathering data and writing a report, is actually consuming what the report is saying and having a proper credit discussion with the customer. And not only that, they can now deal with a load more customers, because it’s not taking him or her a quarter, or months, to put together the data in the first place. So that’s enabling AI. Now go back to that analogy with the volume button: that person is turning the volume up to about 15 or 20. Then you take an auditor at one of the big four firms and look at what they do within an audit. The first thing you do within an audit of a business is gather data from all sorts of different parts of the business and all sorts of different functions. Then you start aggregating that data and matching it, saying, all right, if a customer bought that particular widget from me on that particular date, did I invoice them on time? Did they pay it on time? Was inventory relieved appropriately, so that if they bought one unit, we relieved one unit of inventory from our stock? Do the balances reconcile? All the things auditors look at to say, yep, all their business processes are working. But an audit takes a lot of time upfront to gather all that data.
There are all sorts of questionnaires that go out to managers and leaders of the business, asking them to give access to this, that, and the other, and to provide reports on this, that, and the other. With AI, and actually with agentic AI, all that front-end loading of an audit is automated. We gather all the data, we do all the three-way matching, and so when the auditor actually shows up, they’re actually doing their job as the accountant, the auditor, really testing not the mundane processes or mundane outcomes but some of the trickier ones, the ones which don’t appear totally straightforward. And so you’ve turned this auditor into what I’d call a super auditor, and there you’re turning the volume button up way beyond 20; you’re probably into the 30s and 40s. Now, driverless cars, which I experienced this week: that’s probably about 50 or 60. And many people at the conference were saying, if you have the opportunity to get in one of these things, do it. They do work. If they didn’t work, it would be all over the press, and they wouldn’t be allowed to drive around the streets of Los Angeles. Seeing it in action is, in my book, about a 50. And what’s coming at us after that? I mean, you look at R&D in things like the life sciences; you look at the aggregation of personal health data in healthcare, giving doctors full access to the whole breakdown of your health stats, your DNA, et cetera, so they can deliver a more tailored product, a more tailored diagnosis, or a more tailored remedy to whatever it is you’re looking for from a healthcare standpoint. There’s higher education, and education in general, where we’re now working with institutions to identify where potential students or faculty could be disengaging, for whatever reason, and there are signs when people disengage. So if you’re on an online course, for instance, and you’re going through it, and you’re like me, there will be points where you’re concentrating quite hard and points where you’re not. If you’ve got an interactive course, we can tell whether you’re concentrating or not, and so we can then start, in real time, manipulating the content so it re-engages you, so that you have a higher degree of absorption of that content and therefore you learn. There are things like that going on. There are optimization algorithms at play in supply chain and logistics, which is a huge topic given where we are in the world at the moment. As events happen in one part of the world, a company that has a supply chain spread out around the world for a whole variety of reasons needs to be able to adapt pretty quickly, but it also needs to identify that something has just happened to force it to adapt pretty quickly. And when you have a human in the loop, the recognition of that requirement may not be quite as speedy as with an AI agent in the loop, where they’re on it immediately. And not only are they on it, they’ve come up with four, five, however many different alternatives for rerouting shipping lanes or rerouting logistics, trucks, or whatever, in order to make sure we have the best opportunity of meeting our service level agreements with particular clients, or optimizing the most effective way to fill orders or supply inventory from various parts of the world, given that we’ve had this event in another part of the world. And all of that is becoming very real-time, which is amazing.
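To make the audit example concrete, the three-way matching Andrew describes, checking that each order lines up with an invoice, a payment, and an inventory movement, can be sketched roughly as follows. This is a minimal illustration in Python; the record fields, the five-day invoicing window, and the exception wording are assumptions for the sake of the example, not any client’s actual system.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative records only; real audit data would come from the client's ERP systems.
@dataclass
class Order:
    order_id: str
    qty: int
    order_date: date

@dataclass
class Invoice:
    order_id: str
    amount: float
    invoice_date: date
    paid_date: Optional[date]

@dataclass
class InventoryMove:
    order_id: str
    qty_relieved: int

def three_way_match(order: Order,
                    invoice: Optional[Invoice],
                    move: Optional[InventoryMove],
                    invoice_sla_days: int = 5) -> list[str]:
    """Return the exceptions for one order; an empty list means it matched cleanly."""
    exceptions = []
    if invoice is None:
        exceptions.append("no invoice raised for order")
    else:
        if invoice.invoice_date - order.order_date > timedelta(days=invoice_sla_days):
            exceptions.append("invoice raised late")
        if invoice.paid_date is None:
            exceptions.append("invoice not yet paid")
    if move is None or move.qty_relieved != order.qty:
        exceptions.append("inventory not relieved to match the order quantity")
    return exceptions
```

Run across every transaction, only the orders that come back with exceptions need the auditor’s attention, which is the “super auditor” shift Andrew describes: the routine matching is automated, and human judgment is reserved for the cases that don’t look straightforward.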
And then you look at financial services, where tools are being used to manage risk, detect fraud, and ensure compliance, all things which have taken a lot of time in the profession for many years. We’re now seeing AI come into play and bring an extraordinary level of efficiency, an extraordinary level of reach across data sets, and an unbelievable capacity to process this information and present very real, intuitive information in a very short period of time, and that, I think, is again going to change things. The opportunities in AI at the moment, I think, are significant. Jensen was actually asked that question, and he basically came back and said, look, I think you’re going to start seeing opportunities around healthcare, which I mentioned, life sciences, and the development of drugs. There was another one: he said you’re going to start seeing it in manufacturing with physical AI, which isn’t quite there yet, but it’s coming. That’s where you get into manufacturing plants which are run by robots, building robots to build products, which is a bit mind-blowing, but that’s coming at us. That wouldn’t be anywhere near possible without AI, in that particular case physical AI. There are a lot of different use cases out there, and it’s not just one industry or another, but there are some industries which are being impacted a lot more meaningfully at the moment compared to others. So to give an example: manufacturing, as I just mentioned, is not quite there yet, because physical AI isn’t quite there yet. AI is bringing a lot, but it’s going to deliver a lot more. Travel and hospitality is getting interesting. I asked my assistant today to monitor the BA flight to London this evening, and so I get updates now every hour, just saying, yep, it’s on time, still on time, still on time. I’m a bit nerdy like that, a bit OCD when it comes to travel, but it’s beginning to make its presence felt. It’s not quite at the point where you can say, find me the cheapest ticket between A and B at this particular set of times on this particular day. Not quite there yet, but it’s definitely making progress.
Adam: You mentioned right off the bat that AI won’t replace leaders, but will instead augment leaders, and it’s an important point for any leader to understand. With that in mind, what should leaders understand about the role that AI will play in hiring and in workforce planning?
Andrew: If you look at AI in those two contexts, I think it is very much about augmentation and about enabling. It’s being able to interrogate a far greater data set and then make sensible decisions based on that interrogation a lot quicker than you used to. So in recruiting, you can find something a lot more specific in terms of what you’re looking for, a lot quicker. In fact, I haven’t spoken to many search firms in recent times, but I’m sure that world is changing dramatically; it’s just one of many worlds that I personally haven’t had time to dig into. In workforce management, I think there is a whole variety of things going on, from IoT, the Internet of Things, in the workplace, to being able to identify when people are engaged, where they’re at, whether they’re busy, whether the coffee machine is working, all these sorts of things you can get into from a workforce standpoint that are beginning to come into consideration. Historically, you just didn’t have access to that type of data. Now you do, and depending on what you want to do with it, the chances are the technology is there to do it for you. But it may be quite tempting to go, well, this is great, so I’m going to figure out how many times Johnny has had a coffee this week, and so I either need to reduce the amount of caffeine in the coffee we have in the kitchen, or we need to up it, whatever it may be. You can go down a rabbit hole pretty quickly with these things, where you’re just implementing AI because it’s cool, or it feels interesting, or it feels as though it delivers an experience that may be of value to a workforce, or maybe not. I speak with a lot of executives about this, and I think one of the keys is that we’ve moved out of that proof-of-concept, pilot period I was talking about earlier. We’ve moved into this enterprise era. And you have to, I believe, align AI with what you’re trying to achieve as a business, aligning it to business objectives. Don’t do it just because it’s cool. You need to understand what outcome you’re trying to get to that may be enabled by AI. So yes, it may be interesting to figure out how many coffees Johnny is drinking on a daily basis, but why do you need that information? We could get it for you, but what value is that going to give you as a business? And unless you can answer that question, why even measure how many coffee cups Johnny is piling up? So the two things are: yes, AI is an augmenter, it is an enabler, and it may be tempting to do a whole bunch of things with it, but make sure they’re aligned with your business objectives. And then, on the flip side, you’ve got to make sure that you’ve got transparency and explainability in whatever AI decisions are being taken as a result of an implementation. You’ve got to be aware of, and react to, the limitations of AI. We used to talk about hallucinations a lot a couple of years ago, less so nowadays, but it’s still unpredictable, and there are still situations like my driverless car coming back from dinner last night, which decided not to pull in to the hotel because there was a bus in the way. It probably could have got around the bus, but it assessed the situation pretty quickly and took a decision as a result, the right one, I think. There are these novel scenarios that AI has to adapt to. And then the other thing is: don’t use AI just to avoid the need for you to do hard work.
Use AI to do the hard work you’re currently doing so you can do better and higher-value work, because AI is doing the hard work upfront. Put another way, don’t use AI to make you lazy. That is a bad idea. If you do that, then, going back to your first question, you may be exposed in terms of whether you’re going to have a job in the future. Use AI to enhance what you’re doing. Use AI to augment what you’re doing and enable what you’re doing. Then I think you’ll be taking the right approach.
Adam: What are the most important skills that leaders need to have in the age of AI, and why?
Andrew: Recognizing that it’s not just a technology. That in itself isn’t a skill, but opening your mind and engaging with the whole topic is, and then finding the right balance between what AI is capable of and what the human brings to the table. I think that is actually pretty key at the moment, and it’s extremely helpful in trying to articulate the whole upskilling of resources, rather than the replacement of resources. Building AI literacy across departments, I think, is really important. That needs evangelism, which is a different skill. Leaders tend to be evangelists on certain topics, growth, revenue, margin, but we haven’t historically been evangelists on technologies per se, and now we do need to be. We need to be out there encouraging the organization to really engage with this topic. So that is a different skill: using our evangelist skills in a different way, or towards a different target. I think we need to look beyond traditional education models; that is an area where we need to open our minds and look at different ways to solve certain problems. We need to stay very focused on trust and transparency, on the ethical side, and on responsibility around AI. That is a skill that, as leaders, we haven’t really had before. We haven’t had to engage with it before, but now we absolutely do; otherwise, things could go pear-shaped quickly. I think we need to start thinking about AI governance and what that looks like, and proactively thinking about what it looks like rather than reactively, because if you’re doing it reactively, it’s probably too late, just because of the speed at which these things happen. So you need to address that and put in place clear AI governance and ethical guidelines right up front, rather than after the fact. And for me, one of the biggest skills in this new era of AI is for leaders, as these evangelists, to really drive cultural change within the business. It is cool to engage with AI. In the old days, you’d have people thinking it’s not so cool to engage with technology or IT. Actually, if you don’t engage with AI now, you’re not going to look very cool. It really is a sea change, I think, particularly for leaders, and, as a leader, I’m probably a few years on from the more junior leaders out there. In our generation, there are a lot of things we didn’t look at from a technology standpoint, because it was just, that’s the IT guys, let them go deal with it. No, you need to deal with it. If you’re going to be a credible leader with this younger generation coming up as the next leaders, the future leaders of the enterprise, you need to engage with them, and that requires engagement in AI. But it’s also embedding AI and everything it can do within the business to seed cultural change, so people think along those lines rather than just letting the IT guys figure it out and come back with the answers. That is leadership from the front. People ask me why I spend so much time on planes reading about AI and engaging with it. If you don’t exhibit that leadership right up front as a leader and show people that this isn’t a thing for the tech side, that it’s for the business and that, as a business leader, yes, it’s really important, why are they going to follow you? Why are they going to embrace that type of thinking?
So it’s leadership from the front, and I think we have a big responsibility to drive that cultural change within the business, to adapt to and adopt AI, because it is cool. In fact, if you don’t adopt AI, that’s not cool.
Adam: So much great advice. It really starts with leading by example, and the very best leaders are flexible, are adaptive, and recognize that change is inevitable. And right now, we’re in a moment where the only certainty is uncertainty, and you have to be ready to pivot quickly.
Andrew: 100%, yeah. And it’s tough, because it is moving so fast. Look at the private equity community. The private equity community, historically, is a whole bunch of funds that need to be invested over a period of time; they go out and buy companies which they think they can transform within the horizon of a fund, and then sell at an obviously bigger valuation. And historically, I wouldn’t say it’s been easy, and I’m certainly not going to belittle the industry, but it’s been about looking at companies where you think there’s a transformative opportunity, buying them, executing that transformative opportunity, and then selling them at that bigger number. Today, you may be buying a services company which, in the old days, would have had a fairly easy-to-understand transformation journey. But today, you’ve got to ask a question: what is AI going to do to that services business in the next two or three years? In fact, is that services business even going to exist in two or three years’ time, because of what AI can bring to the table? So as financiers, particularly in private equity, you’re looking at which assets offer an opportunity to bring transformative value, but then you’ve got to understand what sort of threat AI is to that asset, or that particular industry, in order to make an informed judgment about whether you buy that company or not, and whether, net of the impact that AI is going to have on that company, it is going to allow you to command the valuation you need to actually turn a profit on that particular investment. Those are questions that the PE world wasn’t asking even a couple of years ago. AI is bringing change, and it is bringing broad change, and it is engaging a bunch of stakeholders, business leaders who typically weren’t associated with this technology thing. Why? Because it’s not just technology. It’s an enabling entity that we all need to engage with, engage with proactively, and demonstrate leadership in that engagement.
Adam: Is there anything else that leaders should know about AI?
Andrew: The pace of what’s going on here is somewhat unprecedented, so it’s quite hard to look over the horizon and figure out what to worry about. But there are some core things I’m thinking about. One is that AI is dependent on data. It grabs all this data out there to process and then presents information back to you. And the issue we’re facing, I think, in broad terms, is that some of that data may actually be someone’s intellectual property, or it may be copyrighted by some entity. And AI, if you let it, doesn’t really care about that; it just grabs what it needs to come up with the answers you’re seeking and rides roughshod over that copyright protection or IP protection or whatever. I think that’s going to become something people have to think about. I’m sorry it’s a bit dull, but we’re going to have to think about it and understand the implications: how do you protect copyright and an entity’s intellectual property in this world of AI? On the flip side of that, I think people and organizations are going to have to become very aware of, and very sensitive to, something called AI exhaust. Say you’re looking to summarize a document: someone sent you a document and you want it summarized in five pages versus the 100 that you’ve got, or it’s a legal contract and you want to know what you need to worry about. It’s very tempting to put that contract up on an AI model and ask it to summarize it in five pages and read it back to you with a French accent. That’s great, and it will do that. The issue is that the document is now in the domain of wherever that model is. If it’s all private, then you can keep the boundaries around it. But if you’ve got an employee who’s actually put that document out on a public model, that’s not so good, because yes, the employee will get the answer he or she is looking for, but that sensitive document, or whatever it might be, is now out in the public domain, and that is AI exhaust. So yes, you’ve gone through and got what you wanted, but it’s left this exhaust trail of stuff which you may not necessarily want wherever you’ve put it in the first place. That’s a big consideration. Another thing I’d want to think about is quantum. Is it two years out, 10 years out? I don’t know, but it’s definitely getting closer than it used to be, and I think it’s something that is going to enable, or drive, another wave of dramatic change, even within AI, in terms of speeding it up and giving it the ability to process even more than it’s capable of doing today. And then the fourth thing I’d pay attention to is this world of physical AI. It’s when AI joins up with the physical world. I think, as Jensen was alluding to with manufacturing, in the future you have robots designing robots, building robots, to actually build whatever it is. That’s quite interesting. One other is the whole thing around emotionally intelligent AI. You could say that’s coming on. If you listen to these personal assistants, they can put emotion into their voice. I don’t think that’s emotionally intelligent AI; I think it’s just them speaking with an emotional twinge or twang. But I think AI will become, or will have the ability to become, emotional and a lot more human, and be able to work things out for itself.
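Going back to the AI exhaust point: one common guardrail is to check a document’s sensitivity before it is sent anywhere, and to route anything sensitive only to a privately hosted model. The sketch below is a hypothetical illustration in Python; the endpoints, the sensitivity labels, and the send_to_model helper are stand-ins, not any real product’s API.

```python
# Hypothetical sketch: decide where a document may be sent *before* any model sees it.
PRIVATE_ENDPOINT = "https://llm.internal.example.com/v1"    # assumed self-hosted model
PUBLIC_ENDPOINT = "https://api.public-llm.example.com/v1"   # assumed public model

SENSITIVE_LABELS = {"confidential", "legal", "client-data", "pii"}

def send_to_model(endpoint: str, prompt: str) -> str:
    # Stand-in for whatever client library the organization actually uses.
    raise NotImplementedError("wire this up to your model client")

def choose_endpoint(doc_labels: set[str]) -> str:
    """Sensitive documents never leave the private boundary."""
    if doc_labels & SENSITIVE_LABELS:
        return PRIVATE_ENDPOINT
    return PUBLIC_ENDPOINT

def summarize(document_text: str, doc_labels: set[str]) -> str:
    endpoint = choose_endpoint(doc_labels)
    # The routing decision happens before the text is transmitted, so a tagged
    # contract can only ever reach the private endpoint.
    return send_to_model(endpoint, prompt="Summarize in five pages:\n" + document_text)
```

The point is not the specific code but the order of operations: classify first, transmit second, so the exhaust trail of sensitive text never reaches a public model.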
You heard me mention agentic earlier on. Agentic AI is where we’re moving away from just having what I’d call a fairly single-threaded interaction with a model through a conversational agent, to having an agent that’s navigating the model, defining its own sub-goals, actually planning and executing against those goals, learning from the experience as a result of that execution, and then adapting. I mean, blow-your-mind type stuff, and it’s very real today. I think agentic AI is going to become a lot more prevalent over the very short term, and some of the use cases I alluded to earlier on are driven by agentic AI, where you’ve got AI basically running AI. That’s quite an interesting concept as well. But coming back to responsible AI: the ethical responsibility around this very cool technology is huge, and it’s incumbent on us as leaders to make sure we understand what our responsibilities are and how we can act on those responsibilities, because if we don’t, the outcomes are not going to be good.
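For readers who want to see what “agentic” means in practice, the loop Andrew describes, an agent that sets its own sub-goals, acts, observes the result, and adapts its plan, can be sketched roughly like this. The call_model and run_tool functions are hypothetical stand-ins for a real model API and a real tool layer (search, database queries, rerouting systems, and so on), not any specific product.

```python
# Rough sketch of an agentic loop: plan sub-goals, act, observe, adapt, repeat.

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def run_tool(action: str) -> str:
    raise NotImplementedError("stand-in for executing an action in the real world")

def run_agent(goal: str, max_steps: int = 10) -> str:
    plan = call_model(f"Break this goal into ordered sub-goals: {goal}")
    history: list[str] = [f"plan: {plan}"]
    for _ in range(max_steps):
        # Decide the next action given the goal, the plan, and everything observed so far.
        action = call_model(f"Goal: {goal}\nHistory: {history}\nNext action, or DONE:")
        if action.strip() == "DONE":
            break
        observation = run_tool(action)                  # execute against the world
        history.append(f"{action} -> {observation}")    # learn from the result
        plan = call_model(f"Given {history}, revise the remaining plan for: {goal}")
        history.append(f"revised plan: {plan}")         # adapt the plan
    return call_model(f"Summarize the outcome of '{goal}' given {history}")
```

This is the sense in which AI runs AI: one model call plans, another decides the next step, and the loop keeps adjusting until the goal is met or the step budget runs out.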