How Einstein GPT Brings Generative AI to CRM
Michael: Welcome back to Blazing Trails. I'm your host, Michael Rivo from Salesforce Studios. Today, we're talking about the launch of Einstein GPT, which is being announced at TrailblazerDX 2023, happening right as we release this episode. So, we're going to learn how Einstein GPT is going to open the door to the AI future for our customers. And here to show us the way is Jayesh Govindarajan. He's a Senior Vice President of Engineering at Salesforce and leads our artificial intelligence and machine learning efforts. Welcome to Blazing Trails, Jayesh.
Jayesh: Thank you, Michael. It's a pleasure to be here.
Michael: Okay, fantastic. Well, you lead our machine learning and AI practice here at Salesforce, so you must be pretty busy right now. Can you tell me about your role and a little bit about your background?
Jayesh: Sure. Yeah, it's been pretty busy with all the news around GPT and such. A little bit about myself: I came to Salesforce about seven years ago through an acquisition. I had an AI and machine learning startup here in Palo Alto that got acquired, and part of what we built started off our Einstein initiatives, which is about building AI and machine learning into the fabric of CRM. In my current role, I build AI and machine learning systems such as chatbots, Einstein for Sales, and Einstein for Service, and I also currently lead our generative AI efforts.
Michael: Fantastic. Well, there's so much to talk about. And everybody's talking about generative AI right now. There's lots of excitement. Businesses are looking to understand how to integrate it, but they're not sure how to go about it. So, can you tell me a little bit more about Salesforce's approach and how we can help?
Jayesh: Our focus at Salesforce, with pretty much everything we've done with AI, has been how to bring these technologies into Salesforce in a way that's effective for Salesforce trailblazers, all the way from executives and sales functions, to our service reps, to marketers, to the several million developers that build on the Salesforce platform. The question we're asking is: how do we bring these technologies to Salesforce trailblazers in a safe, secure manner, and make them 10x more productive with these tools and technologies? So it's very much a focus on work and the workplace. How do we make work more efficient?
Michael: Yeah, I mean, I think everybody's thinking about how this new GPT is going to impact everything. I really haven't seen a media swarm like this around a technology story in a long time. And it's so interesting, because you're in the heart of it. So, I wanted to dig a little bit deeper. As I understand it, Einstein GPT, in combination with our data cloud and integrated into all of our clouds, as well as Tableau, MuleSoft, and Slack, is going to open the door to an AI future for our customers. What does that mean?
Jayesh: One aspect of how we're thinking about bringing in generative AI is what impact it can have on CRM. There's a bunch of use cases, some of which you mentioned: making salespeople more productive, giving service professionals the ability to automate some of their conversations. Analysts on Tableau can do 10x better with this technology, given its ability to write code on their behalf. And the same goes for developers. In addition to just bringing generative AI into CRM, the question is how do we bring it in a trusted, safe, secure manner? A lot of what we've seen with ChatGPT and other systems is extremely powerful, and it's built on public data sources. But for a lot of this to work in the enterprise, it has to be grounded in the data available in that organization. For the generative experiences to be meaningful for a salesperson trying to, for example, get a summary of an account they're working on, it needs to blend public data about the company they want to research with what's available in Salesforce, in CRM. Being able to blend these two together is what makes this a more trusted, more valuable experience for our customers. In addition to trust, I'd say there's personalization. I have an example here. A couple weeks ago, my boys turned 17, and we are crazy Avengers fans. I was trying to get them a first edition comic of Thor, and the merchant couldn't ship it on time. We got this somewhat lean sorry note, saying, "Sorry we couldn't ship it in time." I read the email and was slightly disappointed. And it got me thinking: imagine that email was written in an Asgardian language, pretending to be Thor. Okay? I still didn't get the comic, but it would've been really memorable. It could've been something we talked about. And frankly, this is what we can do, because we have the information about the Customer 360 in the system. If I had purchased stuff from that vendor, they would have my C360. They know what I'm trying to purchase. So, using generative AI and applying that personalization to the context and content that gets created is going to be extremely powerful.
Michael: Yeah. I mean, it's fascinating. And I wanted to drill down into the data story a little bit. In this new world of AI, nothing is more important than data. It's the foundation of all of these experiences. So, how is our data cloud going to unify all of a company's customer data across channels and interactions?
Jayesh: Great question. I think the revolution of sorts that we're seeing here is still based on three foundational things that have been growing super fast. One is just compute, the ability to do big compute jobs. The second is algorithms; new algorithms and transformer architectures have emerged. And the third thing is data. Until very recently, we had not been able to train on every single bit of data that's available publicly, and that's why a lot of the large language models took off. I think the next phase is going to be how we take that data and blend it with a customer's data, with their permission, in a trusted manner. And second is this notion of grounding. As an example, imagine that you want to write an email about blazing new trails. For that messaging and content to be meaningful, you want to ground it in data so it can produce output that's usable. That's also part of our data cloud story: that data resides and exists in our cloud.
Michael: Right.
Jayesh: And then the final layer is the reinforcement learning with human feedback layer, which is: as you use the system, how do you collect information on usage? What is a good generative output? What is not a good generative output? What generative output needs to be edited before sending it out to the customer? That's a very, very valuable reinforcing signal.
Michael: Yeah.
Jayesh: And again, building this into the Salesforce stack means we can lean in so much more to that, and use it to enhance the overall generative capability that we produce as products. But we can also push it into the lower layers, the large language models, to make them more efficient.
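To make that feedback loop concrete, here is a minimal Python sketch of how everyday edits could be captured as a reinforcing signal. The class names, the similarity-based score, and the (prompt, output, reward) shape are illustrative assumptions, not a description of Salesforce's actual pipeline.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class FeedbackEvent:
    """One human-in-the-loop interaction: a generated draft plus what the user actually sent."""
    prompt: str
    generated_draft: str
    final_sent_text: str

def preference_score(event: FeedbackEvent) -> float:
    """Similarity between the draft and the text that was sent.
    1.0 means sent unchanged (strong positive signal); low values mean heavy editing."""
    return SequenceMatcher(None, event.generated_draft, event.final_sent_text).ratio()

def collect_reinforcement_signals(events: list[FeedbackEvent]) -> list[tuple[str, str, float]]:
    """Turn everyday product usage into (prompt, output, reward) triples
    that a downstream fine-tuning or RLHF pipeline could consume."""
    return [(e.prompt, e.final_sent_text, preference_score(e)) for e in events]

# Example: one draft is sent as-is, another is heavily rewritten before sending.
events = [
    FeedbackEvent("Reply about a delayed shipment", "Sorry for the delay...", "Sorry for the delay..."),
    FeedbackEvent("Renewal reminder", "Hi, your contract ends soon.", "Hello Dana, your contract renews on June 1."),
]
for prompt, text, reward in collect_reinforcement_signals(events):
    print(f"{prompt!r}: reward={reward:.2f}")
```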
Michael: Yeah. I mean, I think that's such an interesting point because we're seeing that right now with the examples of releasing these models into the wild, and seeing the results are not always what people expect. And now, you're talking about customer data. You're talking about interactions with your customer. It's so important that those are right. And how should companies be looking at how they implement this, whether it's in Salesforce or not, to monitor and follow along that process so that they're not making mistakes?
Jayesh: That's such a good question. From a Salesforce perspective, two things really work in our favor. One is just the nature of generative AI in the workplace: how it's used and how it's crafted is not open domain or open-ended.
Michael: Right.
Jayesh: Which means there are some natural guardrails. For example, if you use generative text to write an email to a customer, that really saves you a ton of time. But you're not there to break the system or to engage in chit-chat, like some of the consumer applications.
Michael: Right.
Jayesh: The second aspect is, frankly, our expertise. We've been doing this for eight years now, and we have done it at scale. Our expertise is in building these systems at scale, with trust, with permission from our customers, and, finally, in building design experiences that are very much human-in-the-loop. These are systems we build to augment experts in every domain. And as they use the system, the system gets better. We have ways to store that data in a safe way. We have ways to use that data to retrain models in a safe way. So, for us, this is the next evolution. And every learning we've built over time about building these scalable, trusted systems is something that will come to bear for things like human-in-the-loop.
Michael: And this isn't new for us. I mean, as you mentioned earlier, you came to Salesforce already doing AI work, and AI and machine learning have been a part of our core product strategy for almost a decade. In fact, I was looking at a stat before this: as of September 2022, Einstein was generating 175 billion predictions every day. So yeah, we've been doing this for a little while, and we're seeing this rapid growth now. I'd love to get a little bit of a behind-the-scenes view of the innovation here, with you working on it from the beginning. This is something you and your team have been doing over time, and now seeing it grow, it's got to be pretty exciting to be here right now, isn't it?
Jayesh: Oh, yeah. Absolutely. There's no other place I'd rather be. We've asked, how can AI help automate some of the tasks that are repetitive in nature? For example, if you crack open Einstein for Sales, you'll see lots of AI components that are very tailored to the task at hand. It could be that you're trying to close a pipeline and you want to figure out the order in which to reach out to your customers; we have a scoring function that does that for you. You have recordings of conversations with your customers, and we have the ability to extract coaching moments from those so you can be a better salesperson. We extract knowledge, in the form of how to solve a case, from a KB article. So, a lot of these systems use advanced NLP techniques that we've built and brought to bear for our customer base. And on that journey, the one thing that's exciting for my team and me is that this puts a lot of those efforts on steroids. So yeah, the exciting journey ahead for us is to use the expertise we've built around these automation, assistive, and optimization systems tied to jobs to be done, and then use this new technology to drive it forward even faster.
Michael: Yeah.
Jayesh: And that's kind of where we are starting. We're saying, what are the foundational generative functions that we can build on top of these large language models, and make them available to different personas that we make successful across Salesforce? As an example, completing an email is a foundational task. If you train it on sales emails, it will write great sales emails.
Michael: Yeah.
Jayesh: If you train it on service emails that are about fixing a modem, it can generate great responses when you ask it a question about fixing a modem. These are foundational concepts that we can bring to bear across our clouds.
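To illustrate the "same foundational task, different domain" idea, here is a small Python sketch that grounds a generic email-writing prompt in domain examples and CRM fields. The prompt format, field names, and helper function are assumptions made for illustration, not a Salesforce API; any LLM client could consume the resulting prompt.

```python
def build_grounded_prompt(task: str, domain: str, examples: list[str], crm_context: dict) -> str:
    """Assemble a prompt that grounds a generic 'write an email' capability in
    domain examples (sales vs. service) and CRM fields. The layout is illustrative."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in crm_context.items())
    example_block = "\n\n".join(examples)
    return (
        f"You are drafting a {domain} email.\n"
        f"Relevant CRM context:\n{context_lines}\n\n"
        f"Reference emails from this domain:\n{example_block}\n\n"
        f"Task: {task}\n"
    )

# The same foundational task, grounded two different ways:
sales_prompt = build_grounded_prompt(
    task="Follow up on last week's pricing call.",
    domain="sales",
    examples=["Hi Sam, great speaking yesterday about the Q3 rollout..."],
    crm_context={"account": "Acme Corp", "stage": "Negotiation"},
)
service_prompt = build_grounded_prompt(
    task="Walk the customer through restarting their modem.",
    domain="service",
    examples=["Hi Lee, to reset the modem, unplug it for 30 seconds..."],
    crm_context={"case": "00123", "product": "Home Gateway"},
)
print(sales_prompt)
print(service_prompt)
```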
Michael: Yeah.
Jayesh: Moving a little bit toward the other part that's close to my heart: how can this help engineers like myself? The ability to write great quality code in a consistent manner, so every piece of code is not a special snowflake, is hugely powerful. Helping our customers write high-quality Apex code in a standardized manner is going to be hugely powerful. Using the same generative technology to enable a Tableau user to generate sophisticated dashboards without writing code is going to be phenomenal. So, we're going to try to touch upon each of these areas, but do it in a way that we build the foundations first, and then build reusable components that we can expose to the developer community, both internally to Salesforce and the rest of the clouds, and to our partners outside.
Michael: Yeah. It does feel like we've just reached this inflection point, where you're going to see such an acceleration across the board. And I've been thinking about it as it's kind of solving the blank page problem.
Jayesh: Yes.
Michael: As a content creator, you sit down in front of the blank page and you have an idea, but how do you start? And we've all been there. So, I think it's incredibly powerful to be able to put that idea out as a question or a statement, and then get something back and then work from there. It's an interesting debate as to whether solving the blank page problem yourself is ultimately better or not, but I think it's going to change for sure.
Jayesh: Yeah. I mean, it's a great question. I think the answer is a classic "It depends." I love that when I have to write a generic email, I can give it really high-level direction and have it handle the whole exchange, you know. A set of back-and-forth emails to schedule a meeting is something I don't want to spend a whole lot of my thought process on.
Michael: Absolutely. Right.
Jayesh: That is also content creation. But on the other hand, it frees up time for me to do the creative parts, to write the high-quality document that pens down my thoughts on something, which a machine cannot do today because it simply doesn't have the right level of training data to compose those kinds of original thoughts. There are many, many tasks where you can verify and refine a lot faster than you can create, and an email is a classic one. I can look through an email in 10 seconds and go, "This is good to send with a few edits." And the same with code; I can test the code to see whether it works. So, I think there are a lot of these functions where the creation process is long and arduous, but the verification, and the supervision around that verification, can be faster.
Michael: I'd love to do a little lightning round on some use cases. So, starting with service, can you talk a little bit about super-powered chatbots?
Jayesh: Chatbots today can take a lot of the requests that come into the contact center and respond, but the dialogue back and forth with the chatbot is very programmatic. So, when we design chatbots today, it's a real process, because the dialogue that collects information, which then gets converted into an action, is handcrafted. Imagine that you could take a bunch of conversations between a customer service rep and a customer, and just take that data and say, "Go create a bot out of this."
Michael: Mm-hmm.
Jayesh: Right? So now, you have the ability to go build a bot based on a few real conversations, without having to go through the programmatic process of building that. Of course, you have to test it.
Michael: Right.
Jayesh: And that's why we have humans and experts-in-the-loop. But that frees up an amazing amount of time for the agent to do what really needs a human touch, which is the really complex problems, or where there's an emotional connection that needs to be built with a customer to make sure they're doing fine.
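As a rough illustration of bootstrapping a bot from real conversations, here is a small Python sketch that mines (speaker, utterance) transcripts into a draft intent-to-response map. Grouping by the customer's first message is a deliberately naive stand-in for the clustering or LLM-based labeling a real system would use, and, as noted above, a human still reviews and tests every entry.

```python
from collections import defaultdict

def draft_bot_from_transcripts(transcripts: list[list[tuple[str, str]]]) -> dict[str, str]:
    """Mine (speaker, utterance) transcripts into a draft intent -> response map.
    The 'intent' here is just the customer's first message, a naive stand-in for a
    real clustering or labeling step; a human reviews every entry before use."""
    drafts: dict[str, list[str]] = defaultdict(list)
    for conversation in transcripts:
        customer_turns = [text for speaker, text in conversation if speaker == "customer"]
        agent_turns = [text for speaker, text in conversation if speaker == "agent"]
        if customer_turns and agent_turns:
            drafts[customer_turns[0]].append(agent_turns[-1])
    # Pick the most common closing agent reply per intent as the draft bot response.
    return {intent: max(set(replies), key=replies.count) for intent, replies in drafts.items()}

transcripts = [
    [("customer", "My modem won't connect."), ("agent", "Unplug it for 30 seconds, then restart it.")],
    [("customer", "My modem won't connect."), ("agent", "Unplug it for 30 seconds, then restart it.")],
    [("customer", "My modem won't connect."), ("agent", "Power-cycle the modem and check the cable.")],
]
print(draft_bot_from_transcripts(transcripts))
```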
Michael: Okay, how about auto-generating knowledge articles?
Jayesh: Today, even without the generative piece, we'll find the right content based on the case and surface it to you, so you can teach yourself and then resolve the case.
Michael: Yeah.
Jayesh: Right? The challenge is that it's expensive to write high-quality knowledge articles. So most customers don't manage knowledge well, because it's expensive to do so; they don't have a high-quality knowledge base that they curate and keep updating over time.
Michael: And I imagine this is really common.
Jayesh: Absolutely. It takes time to create high quality content.
Michael: Yeah.
Jayesh: But imagine that you could do that now based on a conversation you had, where you used your skills as an agent to solve a case. All of that back and forth you had with the customer to solve the case is in our system, and imagine you're able to create a knowledge article based on that. What that does next is make every other agent an expert, because that knowledge article is going to show up the next time that case shows up.
Michael: Right.
Jayesh: Even better, if you can train a bot based on that knowledge article, which we can, no other agent may ever see the case because it will be resolved by the bot which got trained on the knowledge article, right?
Michael: Wow.
Jayesh: So there's a whole flywheel effect that's going to come to fruition, because we're able to power generative experiences, writing high-quality knowledge articles based on context.
Michael: Right.
Jayesh: Which is in Salesforce, based on conversations that happen on the Salesforce platform.
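To sketch what turning a resolved case into a draft article might look like, here is a minimal Python example. The `KnowledgeArticleDraft` shape and the heuristic extraction are illustrative assumptions standing in for a generative summarizer; a knowledge editor would still review the draft before it is published.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeArticleDraft:
    title: str
    problem: str
    resolution_steps: list[str]
    source_case_id: str  # keeps an audit trail back to the originating case

def draft_article_from_case(case_id: str, conversation: list[tuple[str, str]]) -> KnowledgeArticleDraft:
    """Turn a resolved case conversation into a draft knowledge article.
    The extraction below is a simple heuristic standing in for a generative
    summarizer; an editor reviews the draft before it is published."""
    problem = next(text for speaker, text in conversation if speaker == "customer")
    steps = [text for speaker, text in conversation if speaker == "agent"]
    return KnowledgeArticleDraft(
        title=f"How to resolve: {problem[:60]}",
        problem=problem,
        resolution_steps=steps,
        source_case_id=case_id,
    )

conversation = [
    ("customer", "The dashboard export fails with a timeout."),
    ("agent", "Reduce the date range to 90 days."),
    ("agent", "Then re-run the export from the Reports tab."),
]
print(draft_article_from_case("00042", conversation))
```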
Michael: Amazing. Okay. Fast-track Case Swarming.
Jayesh: Case Swarming. Interesting. So, similar but different. Case Swarming happens on Slack. What's the first thing you do when you have a P1 issue that your customer service agents need to jump on? You create a Slack channel, and everybody starts piling in ideas on how to fix it. That's the whole swarming concept. What if you could take all that intelligence from all those conversations happening on Slack and do exactly what we did before, which is go create a bot out of it? So that the next time something like that happens, it's resolved at least 50% of the way before we even create a new Slack channel.
Michael: Yeah. And I think this goes back to what we were talking about earlier, about data at the core. All of this data is getting collected, then understood and interpreted, and then you create from it. And that's why the data platform in the center is so critical.
Jayesh: Absolutely.
Michael: And that-
Jayesh: One more interesting use case we're working on, on the sales side, is creating a really high-quality company summary from a prospecting perspective. It's such a time-consuming task.
Michael: Yeah.
Jayesh: Some of the best salespeople spend hours doing that. Some of that data is in the public domain, and some of it is in the Salesforce database. So, being able to construct that high-quality piece of content, and then, having constructed it, to just go write an email to such-and-such a person about the new product you want to talk about. And the data for the new product also exists in Salesforce. So, making that process 10x faster is something that will make salespeople extremely productive.
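As a simple illustration of blending public facts with CRM fields into one research brief, here is a short Python sketch. The field names and the flat merge are assumptions made for illustration; a real system would pull from a data platform and hand the brief to a generative model to expand.

```python
def build_prospect_brief(public_facts: dict, crm_record: dict) -> str:
    """Blend public company facts with fields from the CRM record into a single
    research brief a generative model could expand on. Field names are illustrative."""
    merged = {**public_facts, **{f"crm_{key}": value for key, value in crm_record.items()}}
    lines = [f"{key.replace('_', ' ').title()}: {value}" for key, value in merged.items()]
    return "Prospect brief\n" + "\n".join(lines)

brief = build_prospect_brief(
    public_facts={"company": "Acme Corp", "industry": "Manufacturing", "headcount": "5,000"},
    crm_record={"open_opportunities": 2, "last_contact": "2023-02-14", "owner": "J. Rivera"},
)
print(brief)
```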
Michael: So, it kind of brings me to a really basic question. Why is Salesforce positioned to succeed when it comes to generative AI? What gives us a great position to help our customers?
Jayesh: One, our deep focus on jobs to be done, and our deep understanding of jobs to be done across the Salesforce ecosystem, all the way from developers to sales, service, and marketing professionals. All the AI systems we've built to date are built on that understanding. The second is, when you solve for a job to be done, it's not as onerous to ask customers to give you training data for when the generative system doesn't produce the right output. You're not asking people to label data; you're just saying, "Make edits and use it the way you want to." Right? So that human-in-the-loop cycle, the reinforcement signal that comes just as people are using the product, is hugely powerful. And that happens on the Salesforce platform today, right? In large language model parlance, we call this reinforcement learning with human feedback, except we can do it without building any special training system, just by baking it into the use of the product. That is hugely powerful, and we have the DNA to go build this. There's going to be a lot of education and storytelling to this as well, to get it adopted, so people understand that there are guardrails being built and that it's safe to adopt this technology. And a lot of what we're doing today is part of that too. I think these are elements that are uniquely Salesforce, in that we have the ability to bring these disparate, cross-functional teams together to define partnerships and policy, build the product, and get the product deployed.
Michael: So, it's going to be amazing to see how this comes together on the Salesforce platform. It's going to be awesome. And I just have one last question: for others in the industry, what sort of leadership lessons can you share from leading a team that's working on this?
Jayesh: Great question. I'd say two or three key takeaways for me. First, getting large-scale AI adopted is very much a function of product design, science, and engineering coming together; it's cross-functional by nature. And with the new technologies, I think policy and partnerships are going to become equally important. So leaders looking to make an impact need to recognize that there are going to be elements of policy and education for the customer base. Having a clear sense for where it's going to be safe and high value, versus where there are compliance requirements and high value, and being mindful about approaching those differently, is going to be super important. Second, as with any new technology, you want to be optimistic, but you also want to be a realist when it comes to adoption and implementation. So being able to start by focusing on how to add customer value with this new technology, being optimistic about it but not falling in love with it, and instead falling in love with the problems you're trying to solve for the customer, is going to be important. And finally, industry-wide, I feel like there is a need for leadership in making these systems more widely available. By that I mean these are expensive systems. Every prediction takes a lot of compute today. Each token that gets generated isn't cheap, and there are sustainability implications to it. A lot of the leadership that needs to come to bear is in how we make these models smaller, cheaper, and faster, because it's a business imperative, from a margin standpoint, to be able to do this efficiently.
Michael: Right.
Jayesh: But it's also a sustainability play. If you're able to do this with less compute, it's the right thing to do, but it's also the smart thing to do.
Michael: All right. Well, Jayesh, this has been a fascinating conversation. Thanks for joining us on Blazing Trails today.
Jayesh: My pleasure. Thank you for everything, Michael.
Michael: That was Jayesh Govindarajan, Senior Vice President of Engineering at Salesforce. To learn more about Einstein GPT, head over to Salesforce+, where you can sign up for free to see keynotes, product demos, and select sessions from TrailblazerDX 2023. That's salesforce.com/plus. And be sure to subscribe to Blazing Trails wherever you get your podcasts, or on the Salesforce YouTube channel. Blazing Trails is a production of Salesforce Studios, produced by Courtney Elton, edited by Cynthia Chavez, with original music from Andrew Duncan. I'm Michael Rivo. Thanks for being with us.
DESCRIPTION
Today we're talking about the launch of Einstein GPT, which is being announced today at TrailblazerDX 2023. We are going to learn how Einstein GPT will open the door to the AI future for our customers. Here to show us the way is Jayesh Govindarajan, SVP of AI and Machine Learning at Salesforce.