Humanizing Marketing: Episode 5 - Ethical Marketing with the World Economic Forum

Speaker 1: (singing).

Tina Rozul: Hi everyone, welcome back to the Marketing Cloudcast, a podcast by Salesforce created to bring together marketers and people who want to learn more about how to be better in their companies and careers. I'm Tina Rozul, coming to you from Sydney, Australia, here with my wonderful friend and cohost, Marty Kihn.

Marty Kihn: Hello everybody, and hello Tina. It's amazing to think we are now on our fifth episode in this humanizing marketing series. And today's topic is really relevant to what I've been seeing here at my home base in the US on the East Coast, and also in the rest of the world as I look beyond our borders. We're talking about ethics, specifically the ethics of using data and AI. It's a very relevant follow-up to what we talked about last week, Tina, which was how marketers use AI and machine learning in their work.

Tina Rozul: Yeah, we have a very special guest on today's show, Sheila Warren. She's the deputy head of the Center for the Fourth Industrial Revolution at the World Economic Forum, so a very important job. Sheila works with companies and governments to pilot new technologies that intersect with AI, crypto, and smart devices, and ensures there is the right level of ethical governance in place. She's also on the advisory board for DataEthics4All, an organization focused on the ethical use of data and inclusive artificial intelligence, and a board member for the World Development Report, published by the World Bank, which focuses on how data can benefit impoverished populations.

Marty Kihn: Could Sheila be any more impressive? I don't think so. Anyway, it's a really good topic. As we focus on humanizing marketing, the theme of this series, empathy and ethics are a really important part of being human. Provided, of course, you're the right kind of human. It will also be interesting to hear from Sheila what the Fourth Industrial Revolution is. What is going on there? And is there a Fifth Revolution around the corner, on Zoom? Can't wait to find out.

Speaker 1: Yeah, me too. It should be very interesting. And we're really looking forward to speaking with Sheila, but before we invite her to the show, let's hear from Nick Gernert, CEO of our incredible partner for this series, WordPress VIP.

Nick Gernert: Simplicity is a critical capability because it opens up, it democratizes, who has access to the technology, who has access to the tools. And in marketing now, the more we're able to distribute who has customer touchpoints, honestly, the richer the customer experience gets through a lot of this. So we think simplicity is a really critical capability that might often be overlooked.

Tina Rozul: Thanks, Nick and WordPress VIP. Now let's welcome Sheila to the show. Sheila, you have a very impressive background. As we said, you head the Center for the Fourth Industrial Revolution. What exactly does that mean? And what does working at the World Economic Forum feel like?

Sheila Warren: Yeah, I have quite a mysterious title indeed. What is this Center for the Fourth Industrial Revolution, and what is the 4IR to begin with? Maybe I'll just start by giving you an overview of who I am and how I got to be at this magical place. I am a lawyer by training. I spent some time on Wall Street, then moved out here to Silicon Valley. I'm in San Francisco at the moment, but I moved out to Silicon Valley to focus on philanthropy law, thinking a lot about charitable giving and the law around it. And I wound up, unsurprisingly, working with a bunch of people who made a lot of money in technology. So I got pretty well versed in how technologists think, because the way they think about giving away their money is very similar to how they think about building product. And that got very interesting. From there I built a SaaS product called NGOsource, which was focused on international giving as well. From there, I became general counsel of a nonprofit called TechSoup, and that's where I got really into blockchain. I started thinking a lot about GDPR, and about how a nonprofit like TechSoup was going to actually deal with GDPR. We weren't heavily resourced, we weren't sure what we were going to be doing, and blockchain seemed like a potential solution to that.

And then the Forum, in parallel during my journey and over its 50-year history, was really exploring the intersections of technologies like AI, machine learning, data policy and data science, analytics, IoT and smart devices, smart homes, blockchain and crypto. Klaus Schwab, the founder of the Forum, realized there was a revolution in the making, and he called it the Fourth Industrial Revolution. He wrote a book about this in about 2016, and the Forum opened an office in San Francisco, in the Presidio, to focus on these technologies. So we think a lot about how these technologies are shaping society and what the accompanying governance and policy need to be. Unlike many of the organizations in the spaces we work in, we don't build product. We don't test or pilot, we don't code out of our office; we really build out and pilot policy interventions that can help accelerate the benefits and mitigate the risks of these new technologies.

Tina Rozul: You're right, Sheila. I think that's so important when we use technology. By default we think of the benefits, its speed, its capabilities, and often we miss a step in considering whether the combination of data, technology, and AI is being used ethically. What should we be more mindful of at the intersection of data, technology, and artificial intelligence?

Sheila Warren: Yeah. So I think the fundamental thing to understand is that there really isn't AI without data. Your first point of investigation needs to be the data that you're using. What data are you gathering? What are you using it for? How are you using it? There may or may not be machine learning or an AI developed on top of that data, but every company in the world, every government, every entity is using data in some form, whether they're collecting it, buying it, sharing it, analyzing it, or using the analytics around it. There's some engagement you're doing with data. And a lot of the laws many of us are familiar with, GDPR, CCPA, even CAN-SPAM, are focused on what you can and can't do with data. Whose data can you collect? What do you have to tell them when you get it? That's one orientation, sort of rooted in privacy, but there's a lot more going on with data than privacy. Privacy is one layer, and there are different notions, culturally, around the world about what should be private. What does privacy mean? Who should have access? What should they use it for? What kind of notice do we have to give? All of these things are complicated, so you can't separate out the cultural component. But imagine that all of that data is what goes into the creation: machine learning is building on it, and then you're creating an algorithm. Well, some things can flow from that. And we've certainly learned that lesson the hard way over the last decade or so.
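
Sheila's starting questions, what data are you gathering, what are you using it for, and how, can be made concrete as a lightweight data inventory. Below is a minimal sketch in Python; the class and field names (DataInventoryItem, legal_basis, retention_days, and so on) are illustrative assumptions rather than terms drawn from GDPR, CCPA, or any particular tool.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical inventory entry: one record per data element a team collects.
@dataclass
class DataInventoryItem:
    element: str                 # e.g. "email_address"
    source: str                  # where it comes from (signup form, purchased list, ...)
    purpose: str                 # why it is collected
    legal_basis: str             # e.g. "consent", "contract", "legitimate interest"
    retention_days: int          # how long it is kept before deletion
    shared_with: List[str] = field(default_factory=list)

inventory = [
    DataInventoryItem("email_address", "newsletter signup form",
                      "send the requested newsletter", "consent", 730),
    DataInventoryItem("ip_address", "web analytics",
                      "aggregate traffic reporting", "legitimate interest", 90),
]

# A simple review pass: flag anything collected without a documented purpose or basis.
for item in inventory:
    if not item.purpose or not item.legal_basis:
        print(f"Review needed: {item.element} has no documented purpose or legal basis")
```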

Marty Kihn: You make a good point. So they have an AI model, an algorithm, which is essentially amoral, but it's based on data that has been fed to it. And that data could be text, it could be images, it could be unstructured. It could be something that no one has vetted previously, and that data itself could have, and often will have, some kind of unconscious or conscious bias within it. How should marketers bear that tension in mind?

Sheila Warren: Yeah. It's a real challenge, because you have to be tightly connected to your data analytics people, your data scientists, whoever it is. A lot of the time the person using insight from data doesn't have the ability to affect what data is being used, but it's important to think about. So what we really like to focus on, at the company level, is what systems and processes you have in place to make sure you're thinking about these considerations. Do you have an internal, almost checks-and-balances, QA process running in parallel that's thinking about your collection of data? Are you engaging in appropriate data minimization practices, meaning don't collect what you're not using? Don't collect it, period, is kind of a best practice. Do you have an accountability mechanism built in so you catch things before they become problems? Some companies that are very, very data heavy, or that get a lot of public scrutiny, will even use a third-party auditor, hiring someone to come in and audit what they're doing with their data. There are a lot of considerations around how you create almost a catchall: you want to stop problems before they occur. There are plenty of examples of this going wrong, particularly in marketing campaigns: well-intentioned teams gathering data, learning things about people that maybe they don't really want you to know, and then marketing on those characteristics, which causes problems.
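
One way to act on the data-minimization practice Sheila mentions, don't collect what you're not using, is a simple reconciliation between the fields you collect and the fields your campaigns and models actually consume. The sketch below assumes you can enumerate both sets; every field name in it is a hypothetical example.

```python
# Hypothetical field lists: what forms and pipelines collect vs. what is actually used.
collected_fields = {
    "email", "first_name", "last_name", "zip_code",
    "date_of_birth", "household_income", "device_id",
}

used_fields = {
    "email", "first_name", "zip_code",   # e.g. used for sends and regional offers
}

# Anything collected but never used is a candidate to stop collecting, or to delete.
unused = collected_fields - used_fields
for field_name in sorted(unused):
    print(f"Collected but unused: {field_name}")
```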

Marty Kihn: For a marketer, where do those processes sit? Is that in the marketing department, or is it best practice to put it somewhere else as oversight?

Sheila Warren: It varies a lot, and it does need to start in your data division, whatever form that takes. You might have a chief data officer, it might be a chief digital officer or an innovation officer; there are a lot of different places it can sit. Within large companies there's usually some sort of data science division. And so I think, depending on what your company is like and what your structure is like, these really are executive-level, C-suite decisions. They have to go all the way up to the top to decide what you are doing with your data. It's not a grassroots decision in some ways. You can feed up the insight and say, these are problems we're anticipating, wherever that comes from, but it has to be part of your overall business strategy. And a lot of the time I actually think the overall marketing goals are going to drive what you collect and what you ask. Without that information, you're not creating a data strategy that's aligned with what your vision and your needs really are. So it needs to be a joint effort. Where it sits varies widely; it really depends on the structure of your company. But I would say it needs to be something there's input into from across the house, and a very senior-level decision with buy-in all the way to the top, so you understand you're baking in these requirements or processes, whatever they wind up looking like, and everyone feels accountable to them across the entire house.

Tina Rozul: And you raise a good point. As marketers, people always look at the output, meaning: who am I targeting? Am I reaching my ROI goals? Am I creating a personalized message that's going to get buyers to take action? And they forget about the checks and balances that should happen before that. How can we create more urgency for our listeners to build that culture and change that mindset? Because, like you said, it will vary. How can organizations start to create a culture of holding themselves accountable?

Sheila Warren: Well, I hate to go to the parade of horribles, but I think there are a lot of examples, and unfortunately more every day, of what it looks like when this goes bad. One example: in the early days, when machine learning models were used to translate language, they would actually associate female names like mine or yours, Tina, with things like wedding, mother, baby, and male names would pick up words like work, salary, success, ambition. So it was really, really gender coded, gender biased. And the model wasn't making that up; it was just trained on text that had those gender tropes in it. Someone thought they were just feeding in a novel or whatever it was, but what came out was this association of women with the home and men with the workplace. It was very striking.
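
The gender-coded associations Sheila describes can be measured directly in a trained model by comparing vector similarities. The toy sketch below uses fabricated three-dimensional vectors purely to illustrate the comparison; real word embeddings such as word2vec or GloVe have hundreds of dimensions and would be loaded from a trained model rather than typed in by hand.

```python
import numpy as np

# Fabricated word vectors for illustration only. In a real audit these would come
# from the embedding layer of the model being examined.
vectors = {
    "john":    np.array([0.9, 0.1, 0.2]),
    "sarah":   np.array([0.1, 0.9, 0.2]),
    "salary":  np.array([0.8, 0.2, 0.3]),
    "wedding": np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# If "salary" sits much closer to a male name and "wedding" to a female name,
# the embedding has absorbed the gender tropes of its training text.
for word in ("salary", "wedding"):
    print(f"{word}: male-name similarity {cosine(vectors[word], vectors['john']):.2f}, "
          f"female-name similarity {cosine(vectors[word], vectors['sarah']):.2f}")
```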

Tina Rozul: Is there a way we can hold the people in those positions accountable for that? I almost feel like there should be a level of ethics training before you give someone the power to create that code.

Sheila Warren: It isn't even so much that the code was flawed; it's that the data that trained the model was flawed. Generally, there are three established categories of bias in AI. One is what's called negative legacy. It's basically about the training data, what's used to train the model. This is the garbage-in, garbage-out concept: you put in data that's got bias in it and, surprise, you get bias coming out of it. One of the biggest problems here is that your system might have years of bias baked into it. Banks got into this trouble because there were histories of redlining: they weren't giving people of color bank loans in certain communities, those people just weren't getting them. So their customer data, their loan data, was primarily white men. When you train your AI model on that, you're like, "Oh, we're just putting in our loan data, just throwing it in there," and you're getting analytics on your loans and who was more likely to pay, or whatever it is. But the data itself is so biased, and you don't even realize it, because it's legacy data from before you were even there; you don't even know. And then your model is just baking all of that in. So it's not always easy to spot. It's not that the code or the model is wrong; it's being trained on a faulty, super biased premise, and you have to notice that. There are people who are trained in doing this. It's an emerging field: how you spot biased data and what you do about it. And there are ways to correct it, to be clear. It's not that you have to go invent data. There are ways to get in there and say, "Well, we need to collect a different set, or add in other proxy variables," or whatever. There are ways of mitigating it, but you can't solve what you don't see.

Another kind of AI bias is called algorithmic prejudice. These names vary slightly, but I'll give you the concepts. This is the idea that you may think your model is not biased: we're not looking at race, that's not even a category, it's not a variable, whatever. But you are looking at zip code. Where I live, one zip code is a very heavily Latinx community; two zip codes over is heavily Asian. It really varies. So if you're using zip code, well, surprise, that's highly correlated with race. You're getting race even though your model is technically, quote unquote, blind to demographic data. This one is easier to spot in some cases; it's a little better known. What ordinarily happens is you see the model spitting out biased outputs and you realize, "Oh shoot, I was using a proxy variable by mistake. I've got to think of something else, create something less correlated with the demographic characteristics I'm trying to guard against, and find some other way to get data into the system and train the model." But again, if you don't spot it, you can't solve it.

And the third kind of bias we see a lot is underestimation. This is when there's just not enough data in the model for it to be confident about some segments. The classic example is you've got a very male workforce and you say, "I want an AI model that scans resumes and finds likely candidates." Well, that model is probably going to be more confident about male candidates, because there are just a lot more male employees for it to learn from.
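
Two of the three bias categories lend themselves to simple first-pass checks on a training set: algorithmic prejudice can surface as a supposedly neutral feature, like zip code, that is heavily concentrated in one demographic group, and underestimation as segments with too few examples to support confident predictions. The sketch below runs both checks on a small made-up table; the column names and values are invented for illustration.

```python
import pandas as pd

# A made-up loan table standing in for legacy training data.
df = pd.DataFrame({
    "zip_code":  ["94110", "94110", "94110", "94116", "94116", "94116"],
    "ethnicity": ["A", "A", "A", "B", "B", "B"],
    "approved":  [1, 0, 0, 1, 1, 1],
})

# Proxy check: how concentrated is each zip code in a single group?
# Values near 1.0 mean the "neutral" feature effectively encodes the protected attribute.
concentration = (
    df.groupby("zip_code")["ethnicity"]
      .agg(lambda s: s.value_counts(normalize=True).max())
)
print("Share of dominant group per zip code:")
print(concentration)

# Underestimation check: how many examples back each segment?
# Very small counts mean the model has little to learn from for that group.
print("Examples per group:")
print(df.groupby("ethnicity").size())
```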

Marty Kihn: Is it fair to say that any situation where you're looking at data that involves human beings is, on some level, likely to have some kind of bias? Or, to put it another way, that you should always be looking for the potential for bias in that data?

Sheila Warren: I think yes. And the advice is going to vary depending on where you are. There's a certain kind of Western tech company bias that's very different from the bias you'd see in India, which is different again from the bias in Seoul. So there's cultural bias baked into most legacy data sets; that's kind of known. Now, the good news is that because it's known, and because enough of this work has been done over time, there are plenty of models for how you mitigate it. There's a whole library of proxy variables you can use and ways to guard against problems. Even certain publicly available data sets come with guidance on how to mitigate the bias in that particular data set. So you don't have to reinvent the wheel all that often. But you do have to be aware of this as a default position, really go in and do the investigation on the data, and understand what you're putting into the system and what it's going to give you. Can you rely on the AI, and the data that went into it, to give you accurate outputs? And always contextualize against your goals. That's the place to start: what are you trying to get done? You start there, and you figure out what you need to do to get to where you need to be.

Marty Kihn: Is there any kind of data that isn't biased by the way?

Sheila Warren: Look, I'm going to be a bit of a cynic here: I think we all bring our biases to the table. I'm an American, and I try very hard to guard against having an overly American point of view on things. I would argue it's almost impossible to do that, because I don't even understand the ways in which my mindset brings that Americanness, and that experience of the world I've had my entire life, to the table. The default assumptions I make are not accurate when it comes to thinking globally. They might be very accurate for an American context, or a woman-of-color-in-America context, or a mom context, or whatever it is, but I'm frankly intellectually incapable of putting myself into a worldview that is truly unbiased. So this is part of the reason we talk a lot about who is at the table, who is actually providing use cases, who is actually providing information and data. My point being, the more types of people you have weighing in, in an ecosystem, on a team, whatever it might be, the more likely you are to spot bias before it's out there in the world and you're getting into trouble. So it's a smart thing to think about for a variety of reasons.

Tina Rozul: How can marketers ensure that the data they're actually looking at is not too biased? What's the balance, the right mix, between making sure their leadership team has the right checks and balances in place and making sure the data they're looking at is actually diverse data?

Sheila Warren: Yeah, and I think we have to contextualize here, because there are two ways you conduct marketing. One is you're looking at established customers, and your established customers are giving you data. They are your customers; that data is not inaccurate. Your customer set might be a very narrow customer set, but that's actually okay, because you're targeting that customer set. The fact that your customer data shouldn't be used by another company is irrelevant, because that's not your intention. You're like, "This is who we market to. This is who we see. This is who we work with. They give us information, and what we have on them is what they give us." So bias is a little bit less of a concern in that context.

Now, the other thing marketers do, of course, is look for new audiences, and that's where you have to be careful about bias, because you don't want to extrapolate from your existing customer base and say, "Well, everyone is like that." Not that any decent marketer would ever do this, but you don't want to take that set and say, "This is monolithic, everyone is like our existing customers, therefore we're going to market the exact same way." That's what target marketing is about. You have to be careful when you're going out, because oftentimes you're pulling data sets from other places, so it's not data you collected. There are other ethical questions when you collect data from your own customers, which we'll talk about, but bias is not so much the concern there. It's when you go out to expand your market and say, "We're going to buy a data set from wherever." Do you really know what's in that data set? Is it actually going to meet your needs? Is it really biased? You just have to ask those questions, and focus on your goals. Is this data going to help create insights that get you where you want to be, to the targets and audiences you're looking to reach? If yes, great. If not, why not? And then mitigate that by figuring out what else you need to put in there to make it actually work for you.

Marty Kihn: We had a whole episode on the customer data platform, which is a very, very hot topic, and one that I and another guy actually wrote a book on, called Customer Data Platforms. The point there is that marketers now feel compelled, we'll say, to collect more first-party data. There are a lot of structural reasons for that, but they're like, "I need more first-party data, but I need to gather it with consent. So I need to have a relationship with my consumers where I explain what the value is, and they give me data." There are a lot of discussions like that going on: how to explain the value, or demonstrate it, or just get consent so I can comply with the rules. So how do you think about collecting data from customers, that exchange of value, and the nature of that relationship?

Sheila Warren: Well, I think it's fundamentally changing, because we've operated primarily in a notice-and-consent context. We actually published a paper last spring, and we didn't call it this, although I argued maybe we should have, that basically said consent is broken; the model just doesn't work. I'm not going in and adjusting all my settings every single time on all my devices, a million times over. I'm just like, "I don't even want to." There are times when even I, and I definitely know better, just go in there and think, "What do I have to do to just move on to what I need to do?" It's not reasonable for any individual to bear the burden of all of that consent flying in their face all the time. So our consent model is fundamentally broken.

Now, the good news is there are a couple of different models around holding data that are a little more manageable. There are things like data trusts; this is the trust model concept. You have a fiduciary that holds your data, and you can apply variable permissioning to that data. The trust holds your data, and access is varied; blockchain and smart contracts can actually help with this. We're entering a technical world where 4IR technologies, working in combination, are really going to enable what I call variable permissioning. So take my genomic data, which is the most personal data I can think of; there are all kinds of things you could do with that data if you had it on me. I might say: anonymize it, aggregate it, give it to the City of San Francisco department of public health, because I want them to be able to run a longitudinal study on vaccines and what happens over time. For some public health purpose I'm like, "Take it and do whatever you want with it, just keep my identity out of it." That exact same data, I might be much less willing to give to a perfume company to make a fragrance that's exactly perfect for my body chemistry, or whatever. It's the same data, but I'm like, "This purpose is great. That purpose, absolutely not." So I think we need to shift away from this concept of siloed control and notice-and-consent. It's not the data itself that needs to be managed so much as the use of the data; it's the by whom and what for that I actually care about.

I think most people who are digital natives, and certainly the next generation, who will be crypto natives, do not realistically have the same expectation of privacy that a lot of previous generations do. Now, we could argue all day about whether that's good or bad; I'm going to punt on that question. The reality is that it's a reality; it's just what it is. That said, they want to have more agency and empowerment over who they share things with and what is done with it after it's shared. It's not that they don't think about privacy; they're happy to have stuff out there, but the reason behind it and the context really, really, really matter. That's becoming increasingly true, and it's certainly true in other parts of the world as well. So we have to shift our models around data, how we're holding it, how we're using it, how we're sharing it, how we're transferring it, how we're even collecting it, into a model that recognizes that reality and is more fluid and flexible in terms of what we allow for subsequent uses of data.
And I think we're slowly getting there, simply because the volume of consent notifications is so overwhelming that people have no patience for it anymore.
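
The variable permissioning Sheila describes, where the same data element is allowed for one recipient and purpose but refused for another, can be pictured as a permission map keyed by use rather than by the data alone. The sketch below is a deliberately simplified illustration: the element, recipient, and purpose names are invented, and a real data trust or consent platform would involve far more than a dictionary lookup.

```python
# Hypothetical permission map: (recipient, purpose) pairs explicitly allowed or denied
# for each data element. Anything not listed is treated as denied by default.
permissions = {
    "genomic_data": {
        ("public_health_department", "anonymized longitudinal study"): True,
        ("fragrance_company", "personalized product development"): False,
    },
}

def may_use(element: str, recipient: str, purpose: str) -> bool:
    """Return True only if this recipient/purpose pair was explicitly permitted."""
    return permissions.get(element, {}).get((recipient, purpose), False)

print(may_use("genomic_data", "public_health_department",
              "anonymized longitudinal study"))      # True
print(may_use("genomic_data", "fragrance_company",
              "personalized product development"))   # False
```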

Marty Kihn: Does it have to be regulated? There's a debate: does it have to come from the top, or could it be marketers themselves proactively coming up with, I don't know, an industry body or something like that, or just best practices?

Sheila Warren: I tend to be more optimistic and hopeful about the possibilities of self-regulation when it's done collectively. Individual self-regulation, I think, largely just doesn't work, because the incentives are not really aligned, everyone's doing it differently, and there's too much room for interpretation. But I do think that when there's collective accountability, and it's not even that you're holding your counterpart at another company accountable, it's that you're all agreeing together as an industry, whether it's the marketing industry or whatever it might be, on best practices and standards, that's actually very powerful. There's a lot of skepticism about it, because it's like, "Well, collectively, isn't it even more powerful than individually?" But I actually think there are enough differences across business models, particularly if you cross sectors and don't look just at big tech but at tech, financial services, automotive, or whatever it is; there are all kinds of places where these issues are relevant. If you look across a broader set of sectors, I do feel you're going to get to a place where certain best practices logically emerge. And then perhaps what happens is you codify those things. Maybe they get regulated upon, or they get pulled into regulation with some tweaks and changes here and there. But I tend to be more optimistic about the power of collective action, and perhaps that's just because I work at the Forum; part of what we engage in all the time is thinking about collective action across sectoral or geographic boundaries.

And my friend Trisha Wong talks about this a lot: this constant digital personhood. Who I was as a person was a very IRL experience; I was out in the world doing stuff, whatever. For my children, there's an online component to their identity that isn't distinct from who they are. There's almost a feedback loop there. That is terrifying as a parent, but it can also be very powerful, because there are modes of expression my kids can engage in online that are easier for them to navigate, or that are unique, that they can't really get in the physical world. So we're seeing it really powerfully already. My kids will never know a world without smartphones. They have no concept of what that was like, no idea. And it's even more than the landline or whatever; it's the idea that you can have information at your fingertips. You can look something up because your device is with you, and you can just find out what you need to know, assuming you have the critical thinking skills to assess what's nonsense and what's actually real. You can get facts, or quote-unquote facts, very, very quickly. We're already outsourcing our calendars, our memory, so many things to our devices. And I'm sure we've all experienced that troubling but real moment when you can't find your device: "Oh my God, I can't find my phone. Now what do I do? What do I do next? What's next on my... I have no idea." That connection is going to become tighter and tighter as the years go on.

Tina Rozul: It is fascinating, Sheila, so thank you for sharing your wisdom with us. We are still in 2021 and there are still a lot of unknowns; there's a lot of change, a lot of pivoting. At the heart of everything we do, people want to be better, and they want to do good for their customers and their business. So what's a piece of advice you would share with them, knowing that there may be biased data, segments, and sources being fed into their systems, and that there may need to be some mindset shifts and checks and balances put in place to make sure they're prepared for this year and beyond?

Sheila Warren: Yeah. I don't want to make this sound trite, but I really think it's about having empathy for your customers. Some of the places where marketing campaigns got into trouble, I feel, came down to a failure of empathy. But I frame this strategically around a couple of different principles. One is: are you being fair? Are you being transparent about what you're collecting and what you're doing with it, in a way that you yourself would feel comfortable with? And "you yourself" needs to be a pretty broad set, whoever your intended and expected audience is. Is there accountability in the system? Are you giving people recourse, so that if something happens they don't feel comfortable with, they have a place to go? Then there's the question of whether you're really focusing on making your system trustworthy. Is that a core component of what you're building? What does that mean to you and your audience? Have you articulated principles around what it means for your campaign, or whatever it is, to be trustworthy, knowing that it's context dependent? I think it's really just about putting thought behind all of this, assessing it regularly, checking in on it, tweaking the things that have to be tweaked, and fundamentally prioritizing it.

Tina Rozul: That's really good advice, and I think it's something we all need to be reminded of in everything we do, not just at work. So now we're going to go into some quick rapid-fire questions. The first one: in your experience over the last year, what has been your silver lining?

Sheila Warren: Well, one is my family, of course. And I think I've actually embraced my inner introvert, which I didn't know I had. I've gotten a lot more comfortable. I haven't had a social life, no one has really had a social life, but I think I've gotten comfortable with my own company and that of a very, very small group of people. And I find that in some ways healthier, and there are things about it I enjoy.

Tina Rozul: I will second that. I feel I'm more grounded in solitude.

Sheila Warren: Yeah.

Marty Kihn: Are there any routines that you've adopted, or that you've always done, every day?

Sheila Warren: The most relaxing time for me, which is going to sound crazy, is putting my toddler to bed. I just love it. We have this beautiful moment, and it transitions my day, because I normally end my workday about half an hour before her bedtime, so we get that time to play and have dinner. And then I take her up and we do our book reading and our singing and whatever it is. And I just feel totally grounded for, well, A, the challenge of bedtime with my slightly older children, that's number one, but it's also this really lovely transition that I will truly miss when I start traveling again.

Tina Rozul: That's awesome. And the last rapid-fire question: what advice would you give your younger self?

Sheila Warren: I would say keep following your curiosity. I just kept finding things I was interested in, and then somehow I managed to turn them into professional... managed to get people to pay me to think about them. So I think it would just be: trust your curiosity, trust your instinct that there's something there worthy of more exploration, and don't be nervous about being a novice at something. Don't be afraid to be brand new, because there are so many things that are brand new that are so worth understanding.

Tina Rozul: And that's perfect advice for a time like now. Thank you, Sheila. This is a really great conversation. We really enjoyed having you on the show. Thank you.

Sheila Warren: Thanks for having me.

Speaker 1: Thank you, Sheila, for joining us on the show. We learned so much about ethical governance in the use of data and AI.

Marty Kihn: Tina, it's really fascinating to think about humanizing marketing in a different way, and we encourage all of you out there to have empathy and to ensure the right level of checks and balances is in place to create meaningful marketing.

Tina Rozul: Absolutely, Marty. Thanks as always to our friends at WordPress VIP for partnering with us on this series, and thank you to our editing partners at TrendyMinds and our friends Conner and Sachin, who really are working tirelessly behind the scenes to bring this all to life. Everyone, see you next week for our final episode of this humanizing marketing series, as we talk about the future of marketing with our very own Salesforce futurists. Can't wait!

Marty Kihn: It's going to be great, can't wait. Bye guys.

Tina Rozul: Bye.

DESCRIPTION

We won't fulfill the promise of artificial intelligence and machine learning without a robust discussion about the ethics of it all. As marketers start to use AI tools day-to-day, it's essential to understand the origin of bias in AI and machine learning models and how to mitigate the harm to your customers and to your business. In this episode, hosts Tina Rozul and Marty Kihn chat with Sheila Warren of the World Economic Forum. Sheila pinpoints the sources of bias and how organizations can implement checks and balances to ensure inclusive, authentic marketing.

Presented in partnership with WordPress VIP