Bots and Beyond Episode 28: What Is Ethical AI?


Artificial intelligence has evolved at a breakneck pace in recent years and has made many positive impacts in both our personal and professional lives. As AI solutions become increasingly powerful, concerns have been raised around the ethical challenges associated with this technology. How can companies avoid stumbling into some of those ethical pitfalls? What do we need to know about unconscious bias and the role it plays in the development of AI solutions?

These questions and more are discussed in this episode of Bots & Beyond as host Wayne Butterfield talks to ISG colleague, thought leader and culture and change leadership expert Missy Lawrence-Johnston. Many of the topics Wayne and Missy discuss are also covered in Missy’s recent InformationWeek article, I’m Not a Cat: The Human Side of Artificial Intelligence.

 


Transcript

Wayne Butterfield

Hello and welcome back to Bots & Beyond. This episode, would you believe, is my first one that I'm recording live from our new house over in Southlake, Texas. And it's my pleasure to welcome this week's guest, a local to my newfound location, Missy.

So hi Missy!

Missy Lawrence-Johnston

Hi, happy to be here.

Wayne Butterfield

Brilliant. Great to have you on the show. I know this has been in the planning for so long; I was still in the U.K. when we were originally planning it. So, for everybody else's benefit, Missy is a fellow colleague at ISG. She's one of our principal consultants, and for me, she's one of our thought leaders in the area of ethical AI. But I know that's not all that you do. So, as is standard for our Bots & Beyond guests, I'm going to ask you to introduce yourself and let us know who you are, what you do, and why you do it.

Missy Lawrence-Johnston

Sure. Thanks, Wayne. So, Missy Lawrence-Johnston here. As you said, I'm kind of a big nerd; I'm geeked out about AI, and there's constantly so much to learn. But that's not my primary focus. My main focus for ISG is leading our Operating Models capability here in the Americas, and that's operating models for digital in the enterprise and in non-IT functions; operating models of all kinds. Looking at just the way you work, the way you do things around here: that's your operating model. And oftentimes culture, or what I call the C word because it's a scary word for a lot of people, is embedded in just the way you work or the way you do things. So, my passion, my soapbox really, the why I do what I do, is truly around people and behaviours, and neuroscience, and the psychology of workforce dynamics and workforce behaviour. I focus on the human side of digital, whether that is the human side of cybersecurity, the human side of cloud migration, or just the human side of any type of digital transformation, moving from a traditional way of doing software development to more of an IT-business alignment or a product model. That's really what keeps me up at night, in a good way. That's what I'm geeked out about: really just trying to help with the human side of all of the super savvy, super tech things that people like you are focusing on.

Wayne Butterfield

Great, great to hear. I love that you concentrate on the human side of all of these cool technologies; that really makes a lot of sense. What were you doing before ISG, then? Let's just get a bit of background. You're geeking out now about all of this, but was that something in your past career?

Missy Lawrence-Johnston

You know, the funny thing is I moved, not as far as you, to Texas, to Southlake. I was supporting a chief product and technology officer for a company that builds technology for the travel and hospitality industry, out in Southlake. You may know exactly which company I'm talking about, but that company actually relocated me to the Dallas, or Southlake, area from Minneapolis, Minnesota. Prior to ISG, I worked as a director of strategic enterprise transformation for a major health care company that had spun off and commercialized its IT department. And prior to that I had three or four stints with different life sciences technology organizations. So, if you think of all of the major health care providers and life sciences companies and their enterprise, internal IT departments, I was always the HR business partner supporting them, and HR for mergers and acquisitions. So, organization design and effectiveness, everything people-related to organizational and structural changes and workforce changes driven by IT, whether it's divestitures, or joint ventures, or any type of organizational restructuring; my role was to come in and, I call it kind of squeezing out all of the yummy goodness of your human capital, make sure you optimize that ROI by getting the best out of people. And before the private sector, I spent 15 years in the federal government: I worked for the Department of Labor here in the US, in the Employment and Training Administration. So, people are in my blood.

Wayne Butterfield

Loads of interesting background there! Public and private sector, obviously very focused on the people side even in the world of technology. I really want to get into the meat of today's discussion around ethical AI. You know, I came across you as a bit of a thought leader when I was asked a few questions about this, and some of the things you were saying really resonated with me. I know you've recently written an article on this as well, which we can talk about a little later, but give me your view: what is ethical AI? Let's start with that for the audience, and then let's go from there.

Missy Lawrence-Johnston

Yeah, I think, you know, anyone can go out and Google ethical AI, right, or "what is ethical AI?" And, you know, it's adhering to normative standards and guidelines, some of the fundamental dos and don'ts around things such as individual rights, non-discrimination, non-manipulation and privacy. But to me, it's really about developing AI the way we would think about test-driven development: test-driven development for AI, from an empathy perspective, right? So, you're going to put on your user experience, or end-user computing, lens and ask, throughout the entire lifecycle of planning, building, developing, operating and monitoring AI or ML: do you have the right amount of diversity, equity and inclusion built organically into the people who are developing and doing the work?
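To make that test-driven framing concrete, here is a minimal, hypothetical Python sketch (not from the episode): a bias check written like any other automated test, so that a large gap in outcomes between groups fails the build the same way a functional regression would. The demographic-parity metric, group labels and threshold are illustrative assumptions, not a prescription.

```python
# A minimal, hypothetical sketch of "test-driven development for AI":
# write bias checks the same way you write functional tests, so a large
# gap between groups fails the build before anything ships.
# The metric (demographic parity), groups and threshold are illustrative.

def selection_rates(predictions, groups):
    """Share of positive (favourable) predictions per demographic group."""
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return rates


def test_demographic_parity(predictions, groups, max_gap=0.1):
    """Fail if positive-outcome rates diverge by more than max_gap across groups."""
    rates = selection_rates(predictions, groups)
    gap = max(rates.values()) - min(rates.values())
    assert gap <= max_gap, f"Selection-rate gap {gap:.2f} exceeds {max_gap}: {rates}"


if __name__ == "__main__":
    # Toy predictions: 1 = favourable outcome, 0 = unfavourable.
    preds = [1, 0, 1, 0, 1, 0, 0, 1]
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
    test_demographic_parity(preds, grps, max_gap=0.2)
    print(selection_rates(preds, grps))
```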

Wayne Butterfield

I think that's a brilliant explanation; I really like that. Where does bias come into it, then? Because we hear a lot about ethics, and you hear about bias. Talk to us a little bit more about bias.

Missy Lawrence-Johnston

So, do you have humans involved in anything? If the answer is yes, we are nothing but walking biases, right? We're basically cucumbers, we're mostly water, but our brains create predictions for us to keep us safe. So, all we do, a billion times a minute, is make judgments. And that's how we've evolved, and that's what makes us smart, right? That's what makes us savvy. That's what, truly, helps us to create and to innovate: leveraging the predictions that we make internally without ever thinking about it. That's what makes us human. Our amygdala, to get into the neuroscience a little bit, is the fight, flight or freeze part of our brain, and it's one of the oldest parts of the brain. So, Wayne, as a child, as baby Wayne, all you know how to do when you don't like something is cry or scream: I need to let someone know I don't like something. As we grow, and I know you've got young children so you're probably thinking about this with them too, our brains develop, and our prefrontal cortex, or PFC, starts to keep that amygdala in check. The PFC basically keeps a track record: this will harm me, this will not harm me; this is scary, this is not scary; if I see this, it means that; if you look like this, it means that; if you sound like this, it means that. Right? That's the job of a fully functioning, fully developed brain. So, just to be human is to have this intelligence that keeps your amygdala in check. And if you are a human, that means you make judgments, you're going to make prejudgments, you're going to have prejudices. That's how I distil down bias. There's a lot of bias that is unconscious, but it's not malicious; there's no malintent. And then of course we have our own conscious biases: the things that we know we do or don't like, the things that we want to do or not do. And that's where unconscious bias comes into ethical AI. Are you aware of the unconscious, or implicit, bias? Do you have mechanisms in place to make it explicit and ensure that it is not adversely impacting how you develop and deploy?

Wayne Butterfield

This unconscious bias is just that, it's unconscious, right? It just happens. How on earth does a person combat that, never mind an enterprise, which is made up of multiple, multiple unconscious biases? How do you even try and get around this?

Missy Lawrence-Johnston

That's the right question. It's not about the enterprise as a whole. As we'll talk about with the article later, the enterprise as a whole will focus on testing processes, data governance, how you recruit, how you retain; that's what the enterprise will do. But it's really herd immunity, and you have to start with the individual mind. It is completely individualistic, very specific, case by case. And the number one thing, not to oversimplify it, is so simple and so easy, Wayne, but it's the hardest thing to do: it's just to be introspective. It's to take a look at yourself and to think about some of the things that you may be making assumptions around. As an example, we were recently talking about conversational AI, I think it was last week, right? We were pinging back and forth. If you're thinking specifically of your conversational agents, or your chatbots, or whatever you want to call them, you're thinking of NLP, and just language and how triggering language can be. Take a phrase like "book a flight", or "make a reservation for a flight". Those are two very similar things, but they carry different cultural implications in different dialects, based on your socioeconomic experiences, based on just your worldview. You get online with that conversational agent, you have four or five options, and it can't process what you want to do. I want to book a flight, but the option is "Would you like to make a reservation?" Just small examples like this. How do you go all the way back, as you're developing, as you're processing, as you're inputting data, as you're training, and make sure that you're taking things that are probably assumptions, throwing them at the wall and asking: is that actually the best way to articulate that? That's one small example, around language. So, you have to keep it in your prefrontal cortex, right? Not just that fight, flight or freeze. It requires what I call a mental stamina. You can't be lazy with your cognition. You have to constantly try to take those things that are assumptions, that feel implicitly universal, make them explicit, and then try to deconstruct them as much as possible in everything that you're doing. Whether it's recruiting, whether it's working with colleagues, whether it's the actual data that you're feeding in, you can never be too sure, and it's hard to do on your own. So, as an individual, I then have to take a second step. I have to look inward, I have to try to make those assumptions explicit and poke holes in them, and then I actually have to have accountability. I need Wayne to come in and say: is that the best way to approach this, Missy? We have two different walks of life. You are married with children; I'm married too. You're male, I'm female. I'm a minority, you're in the majority. You're from the UK, I'm from Philly, right? You need that diversity of thought and diversity of experiences to be able to really break things down. And I think that's where you get that collective herd immunity: if every single person is doing that, and everyone's combing the horizon and looking outward for that accountability, and they're okay with that accountability, then that's where the beauty lies. That's how you move the needle.
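As a concrete illustration of the "book a flight" versus "make a reservation" point, here is a small, hypothetical Python sketch (not from the episode, and not any real product's code): a toy intent matcher that only copes with both phrasings because someone made the assumption explicit and added the second dialect to its example phrases.

```python
# A small, hypothetical illustration of the "book a flight" vs.
# "make a reservation" problem: the matcher only handles both phrasings
# because the second dialect was deliberately added to the examples.
# All intents and phrases here are invented for the sketch.

INTENT_EXAMPLES = {
    "book_flight": [
        "book a flight",
        "make a reservation for a flight",
        "reserve a seat on a plane",
    ],
    "cancel_flight": [
        "cancel my flight",
        "call off my reservation",
    ],
}


def match_intent(utterance):
    """Naive matcher: score each intent by word overlap with its example phrases."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, examples in INTENT_EXAMPLES.items():
        score = max(len(words & set(example.split())) for example in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent


if __name__ == "__main__":
    print(match_intent("I want to book a flight"))          # book_flight
    print(match_intent("Can I make a reservation please"))  # book_flight
```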

Wayne Butterfield

It feels, though, like there's a lot there sat on the shoulders of anyone and everyone involved in ML, and there will always be things that, as a result, fall through the cracks. Is there a role for technology to assist us in this space? Or are we really going to have to rely on people doing the right thing?

Missy Lawrence-Johnston

You know, I think there are fair arguments on both sides of the fence. There are some who would say that you absolutely can use the tooling to figure out the problems with using the tools. That's a great idea; you should absolutely do that. But the tool can still only do so much. It's, you know, that argument around intuition and that gut check; will the technology get closer to that? Obviously it will over time. But this is the new ultra-senior leaders' imperative: to drive the culture change of the enterprise and the culture change of the individuals. That means their values, their habits, their behaviours, their mindsets, their outlook, their experiences, the beliefs we hold, all of those things that are not very hard or tactile. Chicken or egg, it still goes back to the individual. The tool doesn't build itself, so you are still relying foundationally on the person. And that's not the challenge, Wayne, in my mind. That's the opportunity.

Wayne Butterfield

Interesting. I like that. I think the more we start to rely on ML, and the wider teams grow, that bias problem feels like it explodes; it gets bigger, and bigger, and bigger. We're at a point now where we really do need to start addressing this, and that's probably one of the reasons why ethical AI and the discussion around bias is so front and center for so many people. You can't have missed how frequently this is now being discussed in the public light. One of the things I alluded to earlier was one of your recent articles on this subject. So, why don't you give the audience 30 seconds on what that's about? I'll then make sure it's available in the show notes, so anybody who's interested in reading it can get access to it really easily.

Missy Lawrence-Johnston

Absolutely. The editors at InformationWeek actually asked, "You know, Missy, we're very curious about this human side of digital. What are your thoughts on the human side of artificial intelligence, and specifically around bias and unconscious bias?" I talk a lot about cognitive dissonance and behaviour dynamics, about helping leaders to understand themselves and the individuals they are leading, and about teaching them coaching. So, the article; the title always makes me chuckle. The title is I'm Not a Cat: The Human Side of Artificial Intelligence. And without giving too much of a spoiler, I'll let the audience go out and get the conversation going, but you must Google, and you probably all know it, this is your crowd, the chihuahua versus the muffin. How's that for a teaser or a hook?

Wayne Butterfield

I love that. And as a Silicon Valley fan, I like the other one, which is hot dog / not hot dog; very similar to the chihuahua / muffin. So, Missy, this has been absolutely amazing. I'm so pleased that we got to do this almost as neighbours. We should really have just done this side by side.

Missy Lawrence-Johnston

We probably should have! I thought about that once we got started.

Wayne Butterfield

So maybe next time. But thank you ever so much for your time today, I really appreciate it. I'm sure the audience will take real value from our discussion. And of course, if anyone is interested in hearing more from Missy, then feel free to reach out on LinkedIn. I'm sure she'll be pleased to hear from you.

Missy Lawrence-Johnston

Thank you so much.

Wayne Butterfield

My pleasure. And thank you. Thank you for listening also, and we'll see you shortly on Bots & Beyond.


Meet the host

Wayne Butterfield


Wayne is an automation pioneer, initially starting out as an early adopter of RPA in 2010 and creating one of the first enterprise-scale RPA operations. His early setbacks at Telefonica UK led to many of the best practices now instilled across RPA centres of excellence around the globe. Customer-centric at heart, Wayne also specialises in customer service transformation and has been helping brands become more digitally focused for their customers. Wayne is an expert in online chat, social media and online communities, meaning he is perfectly placed to help brands take advantage of chatbots and virtual assistants. More recently Wayne has concentrated on cognitive and AI automation, where he leads the European AI Automation practice, helping brands take advantage of this new wave of automation capability.