Episode 78: Celebrating 180 Years Of Innovation with IBM

What Does The Future Of Data Hold?

With these long-lived companies there’s obviously a culture of change that they embrace, there’s a culture of innovation that runs through everything that they do and that's what allows them to keep growing with the times and adapting and changing.

Tune in as we hear from Inderpal Bhandari, Global Chief Data Officer at IBM, as he looks back on how far data has come and shares his view on the future of data, including AI, regulation and the pandemic's impact, in celebration of Dun & Bradstreet's 180-year anniversary.

(Please note that this podcast was recorded remotely.)

The Power of Data Podcast

Episode 78: Celebrating 180 Years Of Innovation with IBM

Guest: Inderpal Bhandari, Global Chief Data Officer, IBM
Interviewer: Anthony Scriffignano, SVP and Chief Data Scientist, Dun & Bradstreet

Anthony Scriffignano 00:00
Greetings and welcome to The Power of Data Podcast. I'm Anthony Scriffignano, SVP and Chief Data Scientist at Dun & Bradstreet, and I'm delighted to be back on the podcast today. I'm joined by Inderpal Bhandari, who is the Global Chief Data Officer at IBM and a very special individual in many, many ways. I'm deeply looking forward to our conversation today. Welcome Inderpal. How are you doing today?

Inderpal Bhandari 00:22
I'm great Anthony. Thank you. Thank you for the invitation. I'm looking forward to it as well.

Anthony Scriffignano 00:28
We, of course, know each other well, and we've worked together in the past, and I consider you a very trusted colleague. Before we get into the conversation that we're gonna have today, could you just give our audience a bit of an introduction about your career and how you came to be doing what you're doing today and what it is you do today?

Inderpal Bhandari 00:44
Sure. Chronologically, my first job after finishing my PhD at Carnegie Mellon University was at IBM's Thomas J. Watson Research Center. I was there for seven years; that's where I got into data mining. And then I left and ran my own company for about 10 years, specifically data mining applied to professional sports and call centers. After exiting from that effort, I became the first Chief Data Officer in healthcare, in 2006. At that point, there were only four of us globally who had that title. But then the profession started to grow, and now there are literally thousands of us. And I was fortunate, because I could grow with the profession. As it turns out, of those original four I'm the last one standing, which I don't know if it's good or bad, but it has certainly allowed me to work the craft, learn the craft and script the craft - I know exactly what I'm going to do stepping into that role. The latest go-around of those roles is with IBM, as the Global Chief Data Officer, and my main function here since 2015 has been to first define and then implement the data strategy for all of IBM. What it eventually came down to is that IBM's business strategy was all around cloud and AI. So we decided to internalize that - to make IBM itself into an AI enterprise, to put artificial intelligence in all our major workflows. That would become our transformation, as well as the showcase for our clients and customers. And that's been my effort over the last four years or so.

Anthony Scriffignano 02:24
Thank you. This year Dun & Bradstreet is celebrating its 180th anniversary as a company. And I've done a lot of thinking about what that means in terms of what was going on in the 1840s and the 1850s - the Civil War, westward expansion, all these things going on. And the company was sort of growing. And IBM - I don't know the exact number, but it's well over 100 years as well, I think 110-ish. So definitely in that rare air; there are very, very few companies in the United States, for sure, that have been around for that long. And one of the hallmarks of these companies is their ability to change and pivot as change happens around them - civil wars, world wars, evolutions in technology. All of these things have made companies and killed companies. And here we are, with this heartbeat, dealing with all of this change. You just mentioned a transformation in the posturing and thinking of IBM; I was wondering if you could reflect a bit on data as being sort of the common theme there. If I think about where we started, it was: tell me about this company, I need to do business with them, and I can't get on a horse and go visit, right? In your world, it might have more to do with the Census, or it might have more to do with companies becoming overwhelmed by the computational complexity of what they need to do in a day. But there's data. And now we sit in a world where data - I'm not sure I love this 'data is like oil' thing - data is like data, right? Data begets data; it grows; it isn't consumed when you use it. It's just something that's there, and it's at the heart of what we do. So I was wondering if you could reflect on the journey of IBM and how data relates to that in your world?

Inderpal Bhandari 03:59
Yeah. So in IBM's case, you're right, the Census was one of the key projects, dating back all the way to 1928 or so with the punch card machines, which began, in fact, to dominate the computing world. But even before that, historically, the ancient Greeks had the idea of a machine that could think, and in some sense that thread has permeated all the way to today. The journey for IBM started seriously, I would say, in 1928, with punch card machines, and then the recognition that they could be used to manage these large projects, like the Census and so forth. But to your earlier point, the whole thing with these long-lived companies is that there's obviously a culture of change that they embrace, there's a culture of innovation that runs through everything that they do. And that's what allows them to keep growing with the times, adapting and changing with the times. So in the case of IBM, when I think about it: the punch card machine, then magnetic tape storage, then the computers for space - the manned Apollo missions to the moon, I mean, all that computing was IBM. The first checkers program that actually learned from experience - part of that progression toward computers that can think, or exhibit intelligent behavior. The first PC, the personal computer - I think that was another pivotal moment, in the early 80s, in the transformation of the company. Deep Blue, the chess-playing computer that ended up beating Kasparov, who was the reigning chess champion at the time. And then Jeopardy!, which showed that a computer could actually exhibit intelligent behavior in a free-flowing quiz program like that. And then most recently, IBM Q, our quantum computer, essentially presaging a whole new age of computing. Obviously, computers and computing and data just go hand in hand. They have to learn from experience - well, experience is just data, right? It's a record of what's actually happened, fed back to the computer. So it's critical; it's one of those foundational pieces of computing. And it's been at the heart of everything that we do. But you can see the thread of these iconic moments. These are significant transformations, quite different from the past. That's, I think, what makes these companies long-lived and great.

Anthony Scriffignano 06:27
For sure. You brought up, sort of by passing reference there, the Antikythera mechanism, and then we get to the Babbage Difference Engine, and - I wasn't around for any of those, but I can sure remember the first time I opened my first IBM PC. I can remember what the box smelled like as I was cutting it open. I can remember the feel of that metal. These are very visceral things; we remember these moments in our life when these significant things happen. And being a nerd, I remember my first computer experiences very well. In our world, at least in the history of our respective lifetimes, the heart of that world is a digital thing, a zero or a one. And now you bring up IBM Q, and now the heart of our world will become these little buckets of infinity - these qubits that have superposition and entanglement and all these weird properties whose implications we're only beginning to understand. And it's hard to avoid wondering how antiquated our understanding of computational complexity and making things computable will be in just a few decades. This will probably seem a very ancient discussion that we're having right now. Data will have a completely different meaning by then. And that will happen in a very short period of time, not in thousands of years, right?

Inderpal Bhandari 07:44
Yes, you're absolutely right that quantum computing is one of those transformational events. Prior to quantum computing, it was zeros and ones. Now, of course, quantum computing has these superposition states - zero and one together, at the same time - and the qubits represent the whole complexity of that state, which essentially isn't resolved until you go out and measure something really significant right at the end. But by that time, you could have addressed some very, very hard computational problems in the process. That's the promise, right? That you can actually solve problems that are not quite solvable today. Now, there's a ways to go with all that. But like you point out, we recognize that this is transformational. That's also why our effort with IBM Q has been to democratize it. It's being made available very widely - to universities across the world, to other companies. And we say get started early thinking about the whole ecosystem, because we recognize that this is going to be a truly transformational event that's going to really change society. And so society should have a hand in shaping it and figuring out how to go about doing it. That's why we're encouraging everybody to get in on it early, as opposed to waiting for the finished product.

Anthony Scriffignano 09:00
And to your credit, I don't know how you can make it any easier to invite people in. You can only bring people in, you can't make them do it. But if they just stick a toe in that water, there's plenty to learn about, and there's certainly an opportunity to see something in its literal creation right now - such an exciting time. One of the things I'm pretty serious about is empirical rigor, and making sure that we not only stand on the shoulders of giants but inform future research in our practice. You mentioned your PhD work, and certainly, when I did my dissertation, I was all about trying to create methods that other people could make much better, right? Because that's what it's all about. At the heart of that, there are two aspects of data that I often think about: the character and the quality of the data. And there's so much behind that, right? We make decisions with the wrong data. We have confirmation bias. There's so much data out there now that if you go into any corpus of data with a preconceived notion, you'll find something that proves you right - but it doesn't really prove you right, it just supports your crazy idea, right? So we have to have empirical rigor; we have to be able to really harness the character and quality of the data that's available. But it's changing as we speak. Just during my articulation of that sentence, I don't want to think about how many pieces of information both of our respective enterprises not only curated but created, and probably rejected and transformed, in the middle of this question, right? How does it change how we make decisions? There's so much information, and it's changing so fast. How do we put a finger down on it long enough to make a decision with it? Isn't that becoming an overwhelming problem?

Inderpal Bhandari 10:36
Yeah, no, I think so. If we look at it as humans, that's certainly true, right? I like to use the example of how medical knowledge keeps doubling very, very quickly, and the doctor has no time, because they'd have to spend something like 160 hours per week just to keep up with the journals - which leaves no time, right? So as humans, certainly, that's true. If we just let that run, it's going to be overwhelming. But that's also where AI and computing and data become relevant, because they're really the only way to harness that data - the experience that underlies that data collection, the actual physical experience, what's actually happening - and learn from it, keep pace with it, and then evolve and grow with it. And these modern computers and methods are very, very powerful and very, very good at doing that. So that's one thread: you're going to use automation, computing, artificial intelligence to keep pace with the explosion of data. Now, having said all that, the other thread: if you look at what we've been through with the pandemic - when the pandemic hit, I know that there was a moment where we had a recognition that all the models we had in place were not really going to work. And we pivoted our efforts to essentially pumping out data as quickly as possible, as it was being collected, about what was happening on the ground, to our various business functions, so that the people who were responsible for those functions could make their judgments and make the best decisions that they could. Which suppliers were going to become at risk because of where the COVID-19 rates were exploding, which customers were going to be affected, which locations, etc. The models, the predictions and so forth - it was a brand-new situation; they still would have had to learn what needed to be done in a situation like that. But if you had trusted data, very reliable data, then the humans who were part of the process could still make judgments very quickly, provided you got them the data very quickly. So I think that's the other thread: you've got to be able to, very quickly - in real time or close to real time - provide highly accurate, trusted, very reliable, robust data.

Anthony Scriffignano 12:51
There are so many things that you just said that I'd love to unpack. One of the things that resonated was, you said when COVID hit. And it really wasn't one thing that happened in one place at one time; it was initially one thing happening in many places at different times. And so what we saw in the data was this sort of hyper-disruption - where we have to remember that there were a lot of disruptive things going on before COVID, right? There was social unrest, there was an election cycle going on, there were trade deals being negotiated around the world; all of these things were causing disruption. Here comes COVID, and as it happens in different parts of the world, the different parties being disrupted respond to that disruption, and they disrupt each other by reacting to the disruption - I call it hyper-disruption. On top of that, you have a very dangerous situation where the rate of change in the environment was faster than the rate of change in the data that describes the environment, because of the latency in the type of data that we look at in the commercial world, for sure. And then we had companies that used to make scuba gear making respirators, and companies that used to make blankets making face masks. So even trying to understand which parties were being disrupted was massively complex. So, you talked about getting the data to the people - people will always have a place in this world. Artificial intelligence isn't something new; it's been around for a long time. What's new now is that we have enough compute power and enough data to start doing some of these things that were practically impossible a while ago. But the reality is that doesn't disintermediate humans; it allows humans to use that amazing human brain they have in situations where there's no intelligence in artificial intelligence, right? If the fundamental mathematical assumptions are invalid - the future does not look reasonably like the past, the data is not sufficient to project, correlation is not causation - when those things start to overwhelm the decision, we need our human brains, right? And that's, I think, where we need people like you.

Inderpal Bhandari 14:47
That's exactly right - I don't know if it's people like myself - but we like to refer to AI as augmented intelligence.

Anthony Scriffignano 14:51
Yeah.

Inderpal Bhandari 14:52
We've been doing that well before COVID hit, because we've always found that for critical business decisions there are going to be humans in the loop, and that's really the way that we need to go. Now, there are obviously certain applications - algorithmic trading being one - where you're going to make decisions very, very quickly, because that's how you benefit from the technology: by being able to respond so quickly that you're able to make some money. Those kinds of situations you could run fully automated. But if you think from an enterprise context, it's going to be augmented intelligence. And I think situations like COVID-19 completely accentuated that, because you really saw that picture emerge so clearly: well, what we're doing is not going to work, you need a fallback, and the fallback is going to be the human in the room. And what you need to focus on is getting them, as quickly as you can, the data about what's happening on the ground. I think that could in fact be a model for how people think about AI down the road as well. There's the AI part of the workflow - that's one path - but then there's always that fallback path if things go really haywire. And if there's a seamless way of going from one to the other, then you do address a lot of the risks that people associate with this.

Anthony Scriffignano 16:09
I think a lot of the common understanding of AI comes from science fiction; it comes from reading. We use these metaphors, but the reality is often very different. Sometimes the reality is informed by the movies and so forth. And for sure, some of the first principles that have come out of science fiction - about robotics, about understanding unintended impacts - are there. There are apocryphal tales we can imagine where we give up a little bit too much control, and then unexpected things happen at a scale that is difficult to manage. So that brings up a dialog going on right now all over the world around authenticity and trust, trustworthy AI, ethics, ethical use of data, provenance, permissible use - all of these very fluffy, squishy terms that are so important - and regulators trying to come to grips with that. I had a conversation with a regulator this week where I said that innovation will almost always outpace regulation; it was like I threw oil on a fire. Regulation can't be written in a broad enough way. And I don't know, I really don't know. I think there's always going to be this two steps forward, one step back, three steps forward kind of thing. So how do you feel about that? These inconvenient human things that we have to worry about in this cognitive AI world, where we're trying to converge on advice that the users are more likely to take - but the users are humans, and they're fickle, and they're not always honest, and they don't always know what they want, and they change their minds. Isn't that a big problem?

Inderpal Bhandari 17:40
Yeah, no. So the way I like to look at it with regard to regulation is that innovation is always going to outstrip the regulation, there's no question. And also, the way you and I work with AI, or even analytics: going in, you don't really know what you're going to discover. The classic example is that retail chain sending a flyer offering prenatal vitamins to a minor - they didn't know what they were going to discover, and what they ended up doing was completely violating the privacy of the individual. Which is why you need regulations. But the innovation is always going to outstrip the regulation, which is another reason why you have to have the human in the loop eventually, in my mind. That's the fallback position that people have to appreciate, and it drives our whole augmented intelligence approach. If you think about what we do: you've got data, you've got models, you've got processes, and at the end of those workflows and processes you end up doing something - decisions are made through the course of the workflow. You need to have trust in all three of those, and the process piece is where you've also got the human in the loop. So you've got that fallback position there. These things have to be set up, instrumented, automated - however you want to call it - but essentially that flow, that interaction, has got to be seamless. In terms of being able to trust the data and the model, that's when you get into things like explainability, transparency, and all that other stuff. I like to look at the regulations and the regulators, in fact, as potentially something that can also spur innovation, because you are presented with a problem because of a regulation. So, for instance, we know that we have to treat data differently if you're sitting in Germany versus the US versus China versus India. But if you want to do that and still work the economies of scale, how are you going to get that consistency in performance where you need it and also deviate when you can't? There's a lot of innovation being driven by having those regulations in place. So I think that's another thing we shouldn't discount: the regulations actually help spur innovation as well. But eventually, like you said, the innovation is going to outstrip the regulations; the discovery is going to outstrip the regulations. And so you have to come back with a very trusted setup along data, models, processes. And one thing I'll add - when you were going through the elements of trust, Anthony, you talked about provenance, sources of data, etc. - is an element of robustness. Along with the transparency, fairness, all that stuff, I think robustness is critical, because robustness in my mind translates to being resilient to adversarial action. As you set these systems up, especially in today's world, we've got to be very cognizant that there are adversarial actors at play - whether they be industrial actors or state actors, or individual actors or criminal actors - and they will take advantage of what you do. So when we think about these trusted setups, that's going to be part of the equation as well.

Anthony Scriffignano 20:54
For sure. Since you bring up malfeasance - a topic I love to talk about, and I know we've talked about it in the past - one of the biggest problems with something like AI in the context of people behaving badly, or contemplating behaving badly and mimicking that in their actions, is that the best malefactors, when they think they're being watched, will change their behavior. So if we just try to model them, well, we're modeling how the best ones are no longer behaving; there are observer effects, right? It's a great problem for cognitive AI, because we can converge on the emerging behavior. And it's not a Bayesian thing; we don't have to wait for the action, we can stipulate different potential future actions and then estimate conditional probabilities. If I keep talking, we'll put our audience to sleep. But I think there's a fascinating opportunity here to say that not everyone is going to observe the regulations, and not all use of data and technology is for good. And so there's a burning platform here for us, as hopefully beneficial practitioners of technology, to do the best we can to continue to challenge our skills and our capabilities, and the data we use, and the explainability of our methods and so forth - because others won't. And this is not a zero-sum game, right? We could become disintermediated very quickly. An organization like IBM - if I think about the number of layers of things to worry about in terms of data being adversarially manipulated, poisoning the milk, if you will, and affecting decisions - you could make a mistake and have it perpetuated all over the world before you can take your next breath. So there's resiliency in there, and there's a concept of decision elasticity: how wrong can I be and still make the right decision, right? This is not a black and white kind of thing we're talking about here. There's a compelling call to action: bad guys are gonna do bad stuff in a smarter and smarter way, and the regulators and the good guys need to move forward too. There's no guarantee that things get better, right?

Inderpal Bhandari 22:53
Yeah, you're absolutely right that there's a race there, and you've got to keep pace in that race. I think the other aspect, though - and you've touched on all this - is that there's tremendous complexity that gets introduced, right? If you think about trust, we've summed it up in that one simple word. But underneath it we've got things like provenance of data, sources of data, and then we get into the more analytic side of things, the AI side of things, with fairness and bias, and then robustness, and so forth. And that's an issue for enterprises that want to set themselves up so that they're actually able to move forward with trusted data, trusted models, trusted processes: there's a lot of complexity that they have to address. We see that, though, as an opportunity, as a technology company. The reason for some of our products on the data and AI side is exactly that - let's bring this together. There's Cloud Pak for Data, for instance; we may not have it all done right now, but the vision for that product is that in one place you're going to be able to address all this complexity. In a few years, once you put it together, you start building on it, and that becomes your platform as you move forward in the AI world for the enterprises, right? So there is an opportunity there, I'd point out; there's a tremendous amount of work and complexity that enterprises have to face up to. And I think that's one of the reasons why the pace of adoption of AI is relatively slow - in an enterprise-wide way it's relatively slow; you see pockets of it.

Anthony Scriffignano 24:25
Yeah, I think we're going to need some new nouns and verbs and adjectives too. You're using the word trust - I'm just pointing out for our listeners that in the computer science world, in the AI world, trust often means certainty that the data I'm using is unperturbed from the point of creation to the point of usage. That doesn't necessarily mean that it's not wrong, or that someone didn't lie; it means that it's been unperturbed as created. That took a lot of words to say, but knowing that I have a trusted solution doesn't necessarily mean that I can trust that the truth is there. That's a whole different conversation about the truthiness, the veracity, of the decisions that we're making. Maybe I can ask you a quick question about that. When we go to court, we have to swear to tell the truth, the whole truth and nothing but the truth. And I've often thought about that - why do they say all that? And I realized those are three different things, right? In AI, and certainly in cognitive AI, and certainly in anything convolutional, where the answer is part of the next question, it's so important that lies and untruths don't find their way into the analytic stack, because they will have children, right? And it'll be really hard to suss that out later - I won't get into it, but Urban Dictionary and Watson comes to mind, right? So how do you think about the fact that there are lies out there, and we're gonna consume them into our AI?

Inderpal Bhandari 25:44
Yes. The model that we find useful is to think in terms of those three elements - data, models and processes - and treat that as one dimension. Then, on the other dimension, within the broader definition of trust, we have things like provenance, sourcing of data, fairness, robustness, bias, etc. And you want to essentially make sure that you're addressing that entire grid. We'll obviously learn many more things as we go through this, but that's been our start. And it's been an evolution. When we started out, it was all about making sure the provenance of the data is right. Then we realized we've also got to start worrying about bias. Now we're thinking you've got to worry about robustness. And this is why we've realized that there's a platform that's essentially asking to be created. That's the opportunity; that's what we're after.

Anthony Scriffignano 26:37
There is also a relative nature to some of these terms. What might be fair for me might not be fair to you. What might be good - I love the word good; every time I use it, I feel like I need to use air quotes, right? Not necessarily good for all, right? So the context of the application of these concepts is also so important. And I think that's why it's important to show your work, right? To show it all. If you try to remove bias, by definition you will be introducing some new type of bias toward the way you think it should be. You might want to think about that, right?

Inderpal Bhandari 27:10
Well, that's the explainability aspect of it, right? You've got to be able to explain what you've done, and then lay out the choices - that's a key part of it. That's one of the key elements of that other dimension we spoke about: you have to have explainability.

Anthony Scriffignano 27:22
We saw a lot of that in the COVID statistics, when they first started trying to report things. It was far from perfect; the data was all over the place. And so you really had to read the footnotes on what all of these measurements meant and what they implied. I'm not gonna get into all the epidemiology, but when we're looking at the infection rate, is that the infection rate of the people we tested, or the projected infection rate of the population? It's so important to understand the meaning of all of that. We're seeing something as practitioners - not us, not in the world that you and I live in, but in the business world that we serve and are part of - where someone could make a valid accusation that the business decision makers of the world are becoming a little too reliant on data and technology, and we're losing the ability to think about the preconditions: what has to be true in order for me to use these numbers? Why would I believe this? Do you think that we're losing the ability to ask those good epistemological questions before we say, we'll use all this amazing data and technology, push the button and get the answer?

Inderpal Bhandari 28:23
You know, it's a very, very good point. Because if you think in terms of the intent - especially the way we've looked at it, augmented intelligence, human in the loop - the whole idea is that the human decision maker will essentially work with the AI system, the AI system will be providing options, and the human will probe those options. And that's the point you're making: to what extent is the human decision maker actually prepared to probe those options? Because if they're just going to accept them, then you're right - they may as well not be in the loop.

Anthony Scriffignano 28:58
And then we're serving the AI. Right?

Inderpal Bhandari 28:59
Exactly. So I think one of the things that goes hand in hand with this notion of augmented intelligence is the preparedness of the workforce to actually work with these AI systems. And we spend a lot of time and resources on reskilling and upskilling. We also start out very early, with high school kids, because we appreciate that if you're not ready to work with these systems in a way where you're actually applying some very critical thought to what's happening and going on, then it's back to square one.

Anthony Scriffignano 29:30
So I think that leads very nicely into a question I had about the future. We talked a lot about the past and the present, and the skills that we need. I do a lot of advising in the academic setting, and the question always comes up: what do we teach today that's going to be relevant in two or three years? Great question - try two or three months, sometimes, right? So what kind of skills - and it's not this or that, right? We can't abandon all the skills we already have. But what kind of skills do the CDOs of the future - Chief Data Officers, Chief Digital Officers, CDOs of any kind - need to hone? What kinds of things do we need to get better at, or learn how to do, in order to survive in this future world?

Inderpal Bhandari 30:09
Yeah, there are the technical elements of that - people have to get more familiar with certain technical aspects so that they're comfortable working with these systems - and I think that's actually the easier part. The harder part is that whole critical thinking, synthesis side, and being able to question and move things forward, because remember, these systems learn as we learn, right? And so unless you're playing that role in a truly rigorous way, the result becomes less than optimal. So I think that's the harder piece: those softer skills of synthesis, of critical thinking, of questioning. In fact, one could even go as far as to say that some of those skills are actually found in the softer subjects. I think as a company we really embrace diversity in terms of our employee set and skill sets. We value people from all walks of life, everywhere - being a global company obviously makes that easier for us - and that's baked into our system, baked into our evaluations: are you really striving to create a truly diverse, vibrant workforce? We come at it that way. But I think there's also going to be more needed as we think hard about reskilling, retraining and educating the workforce to stay abreast of these AI approaches and actually be able to work with them, for the reasons you raised - otherwise it's not going to go as far.

Anthony Scriffignano 31:35
For sure. So I think that we should try to leave our listeners with maybe a piece of advice. If I were going to take a breath and say one thing I would recommend, it's: be humble, right? You can't do this yourself. To your point, the more inclusive we can be of other types of thinking and other perspectives, that's certainly important. Teaching is really important, so that we share what we've learned with others. But we've also got to keep learning, otherwise we become slowly irrelevant, right? And it seems to me extremely important that we always question: what do we have to believe to use these methods and these technologies? What kind of advice would you share with our listeners?

Inderpal Bhandari 32:13
Yeah, I think you summed it up really well, in terms of what one should be thinking about, at least at an individual level. These AI systems are here to stay - and not because there's any choice there; they have to be there. We have to use them to harness the explosion of data, to deal with all that stuff; otherwise we can't keep pace. So they're here, they're going to stay, and the questions are: what's the best way to work with that, and what's the best way to prepare ourselves to work with that? I think it's more of an augmented intelligence setup. So what does a person who's working with an AI system need by way of preparedness? The easy stuff is the technical pieces - not that it's necessarily easy to go off and learn all that, but if somebody is working in supply chain, they don't have to know everything about the technology, about the AI system that's making recommendations; it's helpful to have some idea of what it's doing. Much more important is being able to look at the options being presented with a critical eye - with an eye to understanding and improving the system. In that sense, it's just like any other piece in your decision-making sequence, and you've got to be able to interact with it. So my one piece of advice would be: don't underestimate the importance of the softer skills here - the critical thinking, the epistemological thinking you were talking about. All those things, I think, are critically important. You can come at it from a diversity-of-workforce aspect, but as an individual, just appreciate that there's all that complexity and you probably don't have all the answers - promote the learner mindset that we talked about.

Anthony Scriffignano 33:49
Well, certainly, speaking of appreciation, I would like to express my appreciation for your intellectual generosity and your willingness, always, to do this sort of collaboration. There are things we get to do and things we have to do - this is something we get to do. So it has been my great good fortune to spend this time with you today, and I can't thank you enough.

Inderpal Bhandari 34:09
It's my pleasure, Anthony, as always. A pleasure to work with you, a pleasure to talk with you. I think these conversations are great. Thank you.