Debiasing AI: Training Machines for Good with Jacob Ward and Kathy Baxter
Michael Rivo: Welcome back to Blazing Trails. I'm your host, Michael Rivo from Salesforce Studios. Artificial intelligence is all around us, shaping our world and our experience, often undetected. And examples of inherent biases in machine learning are becoming more and more apparent. How can we proactively think about planning and directing this useful tool for good? That's the question I want to explore today with my guests: Jacob Ward, science and technology journalist and author of The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, and Kathy Baxter, principal architect of Ethical AI Practice at Salesforce. Great to have you, Jake.
Jacob Ward: Thanks so much for having me.
Michael Rivo: And Kathy, welcome to the show.
Kathy Baxter: Thank you so much for having me.
Michael Rivo: Jake, let's start with you. You've been a tech reporter for many years, and you most recently written about the dangers and opportunities with AI in The Loop. What question did you set out to answer with the book?
Jacob Ward: What I was basically looking at was: will we, by deploying AI on one another, inadvertently wind up amplifying ancient circuitry that we have spent really all of modern humanity trying to get away from and compensate for? And fundamentally my question was, are we going to do to our ability to make important choices, as individuals and as a society, what Google Maps has done to our ability to find our way around? That was fundamentally my question here.
Michael Rivo: Well, that's a perfect segue to introduce you, Kathy, because you're from the inside, from the Salesforce perspective and from the software industry, looking at that same issue from an AI ethics perspective. So tell me a little bit about your role at Salesforce and what you do.
Kathy Baxter: My official title is principal architect of Ethical AI. My team and I are part of the Office of Ethical and Humane Use, and we are tasked with working with our employees to understand how we ensure that what we are building is responsible: that we are building technology that customers and society can trust, that we are bringing better things into the world and not creating unintended consequences that we then have to work to put back in the bottle. We also work with our customers to help them implement our technology responsibly, because we are a platform and our customers can use our general-purpose AI tools in many different ways. So how do we give them the tools and the knowledge to be able to use that responsibly in the world?
Michael Rivo: Jacob, in The Loop, you write about how AI is changing the way we think, how it's really impacting our cognitive processes. Tell me a little bit more about that.
Jacob Ward: Well, we've seen it, certainly, on a day-to-day basis in any form of the attention economy. Any time you open YouTube or Spotify or anything else that's trying to grab your attention and not let you go, you're seeing the results of a piece of machine learning, typically, that is looking at past patterns and at people who are like you, trying to predict what you will like next based on what they have liked next, and then feeding you the result. So we see that right now on a day-to-day basis, and that feels to me like the modern incarnation. That's the current loop. The future loop that I'm starting to think about and worry about has to do with what happens when we feed those same sorts of technologies and the same profit motives into much bigger, more fundamental human things. We're seeing places now where divorced parents are being mediated between by automated systems, because it turns out that human conflict, the arguments we get into with our exes over the care of a child, is in many ways just as predictable as our taste in what video or playlist we might enjoy next. And so the same sorts of off-the-shelf systems that are typically picked up and thrown at today's simple attention problems, problems is not the right word, at making money off people's attention, are being thrown more and more at other things. And I worry that we're going to get to a place where it's really not even going to be possible inside an institution not to use those tools, because the expectation is that you're going to be taking advantage of their incredible efficiency at all times. And over time it won't even occur to us not to rely on an automated system and go with its recommendations, because we will have convinced ourselves, and it turns out there's a huge body of psychology that shows why we tend to do this, that a system we don't understand knows the answer better than we do. And so those factors, I think, are going to contribute to this future loop that I'm trying to articulate.
Michael Rivo: And Kathy, how does this play out in the software development process? I think, so many people that are involved with creating these tools are aware of it and becoming more aware of it now, but what are those conversations like? How should the folks who are making this technology be thinking about it?
Kathy Baxter: As Jacob was speaking, one of the concepts that jumped into my mind was moral de-skilling. More and more tasks we hand over to AI and say, "You're magically better at this decision making than we are." Humans are these terrible, biased creatures; AI is this neutral, magical machine, which we know it is not. And we just hand over more and more of these decisions, not just what's the next best movie we should watch, but should somebody be granted parole early, or which areas of a neighborhood should receive additional policing. More and more of these decisions get handed over to AI. And so one of our roles is to ask the question, "Should this even exist in the first place?" What are our red lines? For example, we decided from the very beginning that we would not do facial recognition. That was a red line we drew. And there are other places where we decide that we are not going to allow our customers to use our AI in certain ways, either because our AI was not purpose-built for that use case, and so we know it's probably not going to be as accurate as it should be, or for ethical reasons, because we just feel this is not something we want AI applied to. So we either explicitly don't create technology for those purposes, or we put in our acceptable use policy that we don't allow customers to use our product for those purposes.
Michael Rivo: And I'm curious what you've seen in terms of changes over the years with how companies are thinking about the use of AI. Are you seeing more awareness around this now?
Kathy Baxter: Many companies are starting to create teams like mine. And those that have had teams for quite a long time, many of the more experienced companies in AI, are starting to pull away from some technology. For example, IBM stepped out of facial recognition. They have also stepped away from healthcare AI that wasn't working very well for them, and so they decided that was an area they were going to divest from. I think more and more companies are either stepping out of certain areas or they're adding more safeguards. They're investing much more time and resources in acquiring representative data sets and working on bias mitigation and harms mitigation.
Michael Rivo: Jake, it makes me think about an idea you write about in the book, which is, you call it a world without choices, which is pretty powerful. What do you mean by that?
Jacob Ward: Well, I think it plays out both at an institutional level and at an individual level. We have already gotten to a place where, when Google Maps suggests this route or that route, it doesn't occur to me to go any other way. We've all experienced that on an individual basis. But at an institutional level, we're getting to a place where personal agency is being diminished. As Kathy mentioned, the moral de-skilling of humans is not just an accidental side effect of some of these systems; in many ways it is a feature. It's considered a way of absolving people of difficult work, either because it's tedious or, in some cases, because companies would actually rather rely on an automated system to make a difficult decision than on the off-the-cuff gut instinct of somebody at the front line. And you talk to some organizational psychologists and they say, "If I'm going to replace a hiring manager with software, maybe that's better if that hiring manager is racist or ageist or anything like that. We can make up for those biases through a system like this." That may be true. But what about a system that inadvertently picks up the general systemic inequality of society and regurgitates it as what looks like a very neutral and sophisticated judgment, while the 24-year-old junior HR representative isn't even empowered to override that system and has to go with its suggestion? And there are companies that are making loans and hiring decisions through an entirely automated process. That's the promise: we're promising to do away with human inefficiency and save money in the process.

One really powerful thing that Kathy just said, and I think it's worth dwelling on, is that internally a company like Salesforce has decided, "Okay, we have these bright red lines we're not going to cross." And I'm so grateful that there are companies that have that perspective. But I have talked to many, many companies that do not. You ask people who make software that is just as invasive and just as fraught as something like facial recognition. One of my standard questions when I talk to entrepreneurs is, "You have invented a piece of technology that is going to require more sophisticated ethics. Have you also invented the ethics to go along with that technology?" And as often as not, the answer is, "That's not my job. It is the job of elected officials to regulate that. It's the job of somebody else. I'm just an engineer; this is what I make." There are also companies that actively dissuade the people who work at them from even inquiring as to what the technology they're making is going to be used for. That's a fireable offense in some organizations.

For me, one of the backstops I see coming down the road, and I'm hopeful about it, is this: I wish I had more faith in the power of companies simply establishing their own rules, or in there being some kind of charter that companies voluntarily agree to. I want that to be the case. But ultimately, given a choice between shareholder value and this technology, shareholder value almost always wins out. And now that we're getting to that place, huge numbers of Fortune 500 companies aren't even in a position to explain how these systems work. Polling has shown that among the top CTOs of those companies, in one poll 70% of them said they didn't even really care how it worked. They just wanted it to work. That's the promise of AI: you don't have to think about it. But now I think liability law is going to make it such that people are going to have to think about it. And in cases where, unlike Salesforce, a company doesn't have bright red lines, I think the law is going to start astonishing some of them as we go forward.
Michael Rivo: Kathy, what's your take on that in terms of regulation, where we are now voluntary versus government. There's so many different jurisdictions around the world, et cetera. What's the landscape right now around that?
Kathy Baxter: We have our eyes very much, as I think most multinational companies do, on the draft EU AI Act. It is going to be as monumental as GDPR has been in terms of data collection and data privacy, and so we're very much watching it. Once it's finalized, companies will have a couple of years to come into compliance, and any changes to that regulation will also be a monumental task to iterate on. In the U.S., we're seeing more point solutions: individual cities or states coming up with very specific regulations forbidding facial recognition, for example, or regulations around how AI can be used in hiring decisions. It makes it difficult for a company to comply with all of these different regulations, but they can be created much more quickly and they can be iterated on much more quickly. Within the U.S., that's probably where we're going to see the majority of the movement: individual city- and state-level laws and regulations rather than federal or multinational ones like the EU AI Act.
Michael Rivo: And for so many of us working in business, we are end users of these systems, and many of us probably, as you say, are not aware of how they really work. What are some tips you have, or some things we should be aware of, as we're going through our day-to-day work lives, so we can start to understand when these systems are impacting the decisions we're making, what data we're seeing, and so on? How should we be looking at this?
Kathy Baxter: I think one of the big challenges, and again, Jake, you touched on this, is how do individuals even know when AI is at play? And what is the right to contestability? There is a piece in GDPR that says individuals who believe they've been harmed by an AI system have the right to redress and remediation. Well, first you have to know that you've been harmed. So many people don't even know if they never saw a job ad because of their race or gender, or, if they did see the ad, why they were never called back for an interview. So I think this is a challenge for us as individuals. First and foremost, if you are a tech employee, be very mindful about who you work for, who you give your brain power to. If your ethics do not match the ethics of that company, regardless of the salary, and it's very easy for me in a privileged position to make this statement, work for a company that you can feel proud is bringing better things into the world than what it may be extracting from it. And then as consumers, make very informed decisions about who you give your attention, your eyeballs, your data to, because companies can't survive without your data.
Michael Rivo: It's interesting, I was thinking about that. You work in an organization and you get to hire somebody, you're excited, you create the job description and upload it to HR, and in come the candidates. Maybe you should be asking, "Hey, how are we filtering these resumes? What systems are we using?" And if you extrapolate that out into the many different questions you could ask, it would be interesting to see what those answers are. Maybe, Jake, you can talk about that too, as you've looked at this as a reporter.
Jacob Ward: It is very, very difficult for individuals, certainly at the user level, to know, as Kathy says, what is at play and what is not. In some cases, even people I've spoken to whose entire lives have been shaped by recommendation systems feeding them gambling apps and really addictive products, the people who blow their entire life savings on those products, don't even understand that they have been targeted on Facebook based on past behavior and how it parallels other people's behavior. So to me, it has always been one of the great frustrations of the talking points of companies that say, "It should just be a wide-open world and we don't have any real responsibility for this stuff." The sort of libertarian attitude a lot of people have about this really puts it on individuals; people often say people need to educate themselves or take responsibility for themselves. I've heard phrases like "be intractable." That kind of stuff makes me crazy, because all the market forces in the world are working against that. I'm extremely encouraged by something like what Kathy is saying there, about how the people who make this stuff, those prized brains being hired in, might begin to say, "You know what? Maybe it's part of my job to ask how this is going to be used. Maybe it's part of my job to really understand it and raise my hand when I object to it and threaten to go elsewhere." That kind of stuff is super powerful. Really, we could use more of that, because I meet a lot of mid-level people who think it's not their place, or that it's somehow immoral, to impose their own worldview on this stuff. That is complicated. But I really think AI has created a new chapter in how we think about what we deploy on one another. If you're deploying a system that's supposed to make us the best version of who we are, or is supposed to help us make moral decisions, or whatever the thing is, we have to decide what those moral decisions are. In this country, we can't even decide whether somebody addicted to gambling is in control of themselves or not; that is still being argued on a daily basis. Our understanding of ourselves and our values is about to be adjudicated through this stuff, because the technology allows us to program it into the code. It allows us to pre-decide that stuff. And so you have this very small, very powerful set of thinkers working inside the companies, deploying these systems, essentially making those decisions. And thank God when it's Kathy. Thank God when it's Kathy and her team, but there's not a Kathy at every company, you know what I mean? So I think we're at the beginning of a very complicated phase of human history that we have to start to get out in front of.
Michael Rivo: Kathy for companies that don't have someone in a role like yours, how do you suggest that they build AI awareness into their organizations?
Kathy Baxter: This is definitely a challenge. How do you turn that into processes or practices in the company? How do you think about building ethical reviews into your product development life cycle? All of these things build over time. I've published an ethical AI maturity model and validated it with peers who have had teams like mine at their companies, and my coworker [inaudible] Lesinger found Patrick Hudson, an international safety expert from Australia, who created a safety maturity model. That was absolutely amazing, because it maps right onto our ethical AI maturity model: to build responsible processes in a company, whether it's security or physical safety or responsible AI, you follow the same path of developing that muscle. No matter how well intentioned the executives at a company might be, you can't create an ethical culture or a safe culture overnight. It takes time and incentives and processes to do it.
Michael Rivo: And so where can I find the documentation? Is this in Trailhead or where do I get it?
Kathy Baxter: Yeah, we've got this great little shrink link, sfdc.co/EAIMM, that people can go to and check that out. We also have some blogs and other resources available. I'm really proud that at Salesforce we also have a Trailhead module on the responsible creation of AI. We have a lot of resources that we've been sharing because we don't see this as a competitive advantage. We really want all of the technology we're all using to be ethical and safe.
Michael Rivo: Jake, it can't be all doom and gloom out there. What glimmers of hope do you see for us out there?
Jacob Ward: Well, I want to be clear: I think AI does have an extraordinary capacity to do amazing things. Look at something as simple as trying to figure out whether a mole on somebody's arm is going to turn into cancer or not. AI is vastly better than even the best-trained human technicians at spotting which mole is precancerous and which is not. So we've seen, time and again, places where it is fantastic. For me, it is the places where we have decided to deploy AI not for profit that I am so blown away by. You've got people making AI systems that can try to fill in the gaps in our archaeological record, to figure out what people made between this piece of Etruscan pottery and this piece of Egyptian pottery. These are incredible uses. If you were to give even the simplest off-the-shelf system to a social services agency and they got to use it to match unhoused people with the social services available to them, it could be incredible. I think there are amazing ways to use this stuff. Anyway, I take comfort from something a guy said to me once, one of the architects of the Good Friday Agreement, the Belfast Agreement that ended the Troubles in Northern Ireland. This guy, Lord John Alderdice, whom I mention in the book, points out that it took years of negotiation just to sort out where everybody was going to sit at the table, literally sit. And I look at that and think, oh right, it takes a while to figure this stuff out. And unfortunately technology moves fast, profit moves fast. We've got to move fast, but we can figure this stuff out. The three of us are not sitting here smoking cigarettes right now because we figured some stuff out. It took a lot of court action to make that happen. But we can do it, you guys. I take some comfort from that as long as I keep my time scales in mind.
Michael Rivo: Okay. Well, Jake, that was super interesting. Thank you so much for joining us today.
Jacob Ward: Michael, Kathy, thank you so much for having me, really appreciate it.
Kathy Baxter: Thank you so much for having me. This is an absolute joy.
Michael Rivo: That was Jacob Ward, science and technology journalist and author of The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, and Kathy Baxter, principal architect of Ethical AI Practice at Salesforce. Thanks for listening today, and if you liked this episode of Blazing Trails, be sure to subscribe wherever you get your podcasts. I'm Michael Rivo from Salesforce Studios.
DESCRIPTION
Artificial Intelligence is all around us, shaping our world and experience, sometimes undetected. Inherent biases in machine learning have become apparent. Now is the time for proactive thinking, planning, and directing this useful tool for the good of society, rather than for the bad.
Author Jacob Ward and Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce, join Michael Rivo in this critical conversation around debiasing AI and ensuring it has a positive impact on our lives and world.
Mentions:
- The Loop: How Technology Is Creating a World Without Choices and How to Fight Back
- Salesforce AI Ethics Model
- Benioff Ocean Initiative
- OECD AI Principles



