Let's start with our first question. Why is AI transformation today important and different?
We need to do this level set because it's important to understand that there's one big theme hanging over the industry right now: AI is having an exponential impact on organizations, on work, on industries, and of course, on society as a whole.
That exponential quality is an important distinction, because we've had plenty of linear changes in technology, maybe an overwhelming number, that we have to tackle. But most of those are relatively safe to leave in the hands of technologists to lead.
In this particular case, when you have an exponential change, it's really critical for the business to take leadership. When I'm in executive boardrooms and we're talking about business priorities for 2024 and what's going on in organizations' mindsets, we're seeing consistently that the number one priority is understanding how they can take advantage of AI.
How can they address the opportunity, whether it's in the way they operate, the way they collaborate, or the way they communicate, across the whole of the employee experience and the customer experience stack? This is one of those milestone moments where we have a really big change happening. And we've been through this before.
We've had other big changes, but most of them happened over generations, or at least over decades. We had longer time periods, right? There was a huge amount of change when the Internet and websites arrived. We had the removal of presence and access barriers with mobile, and then even the mobile app revolution that followed.
Even though those were big changes, they happened over decades. Most people, myself included, are predicting that AI is really going to have a big impact in years, not decades.
The pace of this change is much more significant. The second difference that's really interesting, and we'll come back to this with a few examples today, is that AI essentially removes skill barriers: it gives you skills that you may not have without AI's assistance.
I'm pretty good at PowerPoint, and I can create PowerPoint presentations. If you don't have that skill, AI could potentially give you that capability. I'm a pretty good communicator. I can write relatively okay emails and things like that. But at the end of the day, I'm not a linguistic expert. I don't have a deep literary science background.
Yet, with AI, I can have access to that. And those skills can be used in a variety of ways, not because I possess those skills, but because AI as a Copilot or an agent or a supportive tool set can help me with those things. This post-generative AI era is different from the era before it.
We do have a huge acceleration of AI use cases, AI application. We kind of figured out how to use the experience layer with conversational engines and more, and so we really have a bunch of opportunities in front of us. Now, this brings up a broader discussion, which I know we're all kind of working through, which is, what should we do about this?
If it's coming fast, it's important to learn, but why might it be useful to embrace and learn a bit sooner? I will pass it to you, Maya, because I know you spend a lot of your time with change management, helping organizations through those transitions with analytics and more. I'd love to see if you have any perspectives on this.
Absolutely. AI doesn't stand for an apocalypse of intelligence. It stands for artificial, or perhaps better yet, alternative intelligence. We need that alternative way of thinking. And throughout history, like you said, we've constantly demonstrated our ability to adapt to new technologies, and AI is no different. Right?
The rate of technological change is unprecedented. Like you said, what used to take decades now happens in a matter of years or even months, and it's time to embrace this new technology. It's here to stay, obviously.
The sooner we embrace it, the more time we'll have to harness its benefits. Like you said, automating routine and mundane tasks frees us up for higher-order skills. And we also need to consider the ethical aspects; I know data privacy, bias in algorithms, and so on come up a lot. But it's a relatively new technology in the post-generative AI era, and we're learning as we go.
But we do need to embrace it. We do need to adapt and work alongside it.
I love that comment, because there's this really interesting challenge: today, AI is not interwoven in the fabric of our organizations, maybe not our industry, and certainly not society yet, but we can all see it coming.
As it becomes more interwoven, the consequences increase. The mistakes you make, especially if you cross some of those governance or ethical lines, could have much more significant consequences. So I totally agree. I love this idea of embracing it sooner.
We're going to make mistakes, but making those mistakes now is much less consequential than it will be in a few years. I think that's where we land on this question: it's important to embrace it sooner, and again, years, not decades.
It's really important to understand that this is going to be a big, tumultuous period, and there's going to be a ton of learning as we go through it. We'll all have to stretch together. Let's look at another angle of this: the way collaboration changes.
We've talked about productivity gains, and we've talked about some of the quality metrics, and then the importance of the data underlying it. But another interesting finding that we've had is the way it affects collaboration. There's this old analogy: one plus one equals three.
Basically, the idea is, if Maya, you have an hour and I have an hour, and we collaborate together, we'll produce something. On average, that's a better quality outcome than if you just had 2 hours or if I just had 2 hours. And that's the whole point of, you know, the whole being greater than the sum of its parts.
Now, there are some risks there, right? If we add too many people to that equation, the skills and expertise each of us brings to the table diminish in value, because we're just not going to fully utilize those skills and experiences.
There's a cost as you scale collaboration, and there's a cost the second you have more than one person collaborating: miscommunication, communication overhead, and more. The reason everything is not collaborative in most organizations is because of those costs, right? Because it's not always the best outcome.
From an ROI perspective, the return on investment is not as great. What's interesting is that in the Copilot experience, we're seeing two major shifts. The first is that each individual has an agent or an AI, like a Copilot. Each person working with it is, of course, bringing their own experiences and skills.
But remember, they're also bringing all the skills that the AI agent has, as well as the knowledge the AI agent has access to. In that scenario, I'm able to do far more in a collaborative flow than I could ever do individually, especially when we talk about skills, digital skills especially, and how these tools give us more capabilities.
What's more, each person has their own Copilot, so you might use it in different ways in a flow of collaboration. Let's say we're preparing an outline for an upcoming meeting. We're providing additional details, adding more information, and turning it into an agenda, and in each of those steps, we're each using AI, but we're using it in different ways, right?
The other participant in this dialogue is using it to add content and pull content from different data sources, whereas here, we're using it to format a paragraph and then do agenda work, as an example. That model of how we use it is important because it's how we learn from one another, which is really powerful.
Learning through these pilots and previews with Copilot is the way people learn together in this collaboration flow. But it also changes the value of the outcome, because now we're able to produce more.
One times AI plus one times AI equals maybe a lot more than three. The other thing that's interesting is that whole collaboration flow we just did can also be summarized and can be actioned upon, right?
From the transcripts, to the activities that happened, to even taking this result and combining it with the result of a different collaboration: all of these are wrapped in AI as well, with their own use cases and potential.
When we see that 60% baseline, Copilot adds more, and plugins add more on top of that. Then you take something like this effect on collaboration, and it adds a compounding amount on top. That's already really interesting and worth thinking about.