Today I have a question that comes up regularly in my work, and that question is a really important one. Is AI good for work, or is AI bad for work? And this is a question that is hard to answer because, in theory, we could just say, oh, it's good for some people, and it's bad for others. But I think it's more complex than that.
And I thought it's worth having a brief discussion about the things I talk about with other leaders, workers, and the people we work with as they navigate this change. When we talk with leaders who have departments or teams that they manage, they are trying to grapple with AI because they care about their teams. They care about the groups of people they work with, and they want to understand: is this going to be good for my teams, or is it going to be bad for my teams and my departments? I also have leaders in organizations asking, hey, is this good for our business and our industry, or is it bad for them? So it is a complex question to ask.
Let me start by explaining why AI can be quite bad to work with. What are the challenges with AI in the work that we do? What are real-world examples of challenges we run into? Now, the first question I always get from pretty much everyone, whether they're a business leader or an individual working in an organization, is: is AI going to take my job? Is AI going to take my work away? And the truth is, AI is not necessarily going to take your job. What's going to happen is that people who know how to use AI and how to work with it are going to take the jobs of those who do not.
So then it becomes a question of whether you are going to learn AI. Are you going to learn how to collaborate with it and use it in your work? Or are you going to defer or delay that, potentially to your detriment? The second part of this, and it is an exhilarating and terrifying prospect, is: what is the right pathway, and when should I adopt AI myself, in my organization, and on my team? And this is tricky because, to be honest, mistakes are going to be made. And if you change things and use AI, those mistakes could have pretty significant consequences. But just like before, there is an easy answer here.
The truth is it's great that OpenAI and others are making AI accessible to more people. Because while it is terrifying and exhilarating, while it is those things, it's also something where today, the scale and consequences of those mistakes are much less than they will be in the future. And that's important because we're all going to make mistakes, and we're all going to learn whether we're leaders or whether we're individuals.
Now, it's also important to understand that AI is a nonlinear, exponential technology, and it is growing very rapidly, as are its use cases and the ways we use it. So there's an argument that getting involved, using AI, and learning it now is better not just because the consequences of mistakes are smaller, given how much less interconnected and interwoven it is in our businesses and our work today, but also because the technology is easier to grasp right now. Yes, it can be daunting, but you could learn a lot today about how AI can be used in your work and in your discipline.
It might be much harder to navigate that in the future, when the ways we work with AI become richer and more robust. So I think that's another reason to get motivated. The first takeaway here, then, is: hey, maybe it's really important for all of us to adapt, learn, and dig into AI in our work as soon as possible.
And I think that that's true. I think that's a pretty safe assumption. Now let's talk about the other challenge with this, which we mentioned earlier.
Is AI going to take my job? And the truth is, as we mentioned, AI is going to take your job if you're not invested in it. But what if you do invest in AI? Is that still going to reduce the number of jobs? Could I still lose my job even though I'm in the group of people who adopted it? And the truth there is, it's much more complex than I think people realize in our space. We work in employee experience.
We work in digital technologies like collaboration, communication, and Microsoft 365. And while Copilot and tools like it are going to improve and become accessible to everyone, the reality is they're going to support and offload certain tasks, making you two or more times as effective as you were without AI support. And we see this today.
If you look at Copilot for coding, right, Copilot has been around for a while, and when we look at the data there, we see very consistent results: things like 2 to 2.5 times more efficient or more effective coding, and a higher-quality product, right? We see work being done in less time with the support of AI. And those who don't use it? When we run those tests, they take much longer, their work is less complete, and arguably the quality is lower as well.
And so, in that scenario, it's important to understand that there's this really important cost premium we pay when work is split across multiple people. That alone is daunting. Because if someone can do 2 to 2.5 times the amount of work, doesn't that mean there's going to be less work for all of us? Well, maybe not. We'll come back to that. But at the end of the day, it does mean that when so much of our work is interconnected, when we collaborate with others and so much of our work today is split across two or more people, the impact is bigger.
Because when you take one person, and you make them two times more effective on work that was split across two or three people, what you're actually reducing is communication overhead. You're reducing miscommunication because they don't need to work across as many people. And that has a much higher premium and a much stronger value proposition.
That could lead to three or four times the amount of work being done, in cost-effectiveness terms. And that can lead to some significant job loss in the marketplace, and even in your own organization. So this is something worth worrying about.
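To make the overhead argument concrete, here is a rough sketch of my own (an illustration, not something from the discussion above), using the classic pairwise-channel model in which a team of n people carries n(n-1)/2 communication channels:

```python
def channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Work split across 3 people carries 3 coordination channels; make one
# person 2x as effective and the same work may fit 2 people (1 channel)
# or even 1 person (0 channels), removing miscommunication entirely.
for team_size in (3, 2, 1):
    print(team_size, "people ->", channels(team_size), "channels")
```

Shrinking a three-person task down to one or two people doesn't just halve coordination, it can eliminate it, which is why the effective gain can exceed the raw 2x productivity boost.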
And yes, adopting and embracing technology now is going to help you be more successful and less affected in the near term. But this creates this other underlying challenge. Now, as I mentioned, with each of these challenges, I want to provide at least a positive perspective.
And the positive perspective is that in the past, challenges like this led to some pretty positive outcomes. Take the way technology changed organizations: the advent of the printing press, or pen and paper arguably even before that, led to the spread of societal literacy. And that created new demands, demands that many of us are employed to address today in information work, in offices and places like that.
And as that evolved from the printing press to typewriters, it created new opportunities. Yes, it displaced some work, but it led to new things like the spread of systematic management. And that led to standardized documents, and standardized documents led to new logistical and management roles and jobs.
And if we look at the spread of information management with computers, well, that led to electronic documents and digital work. And if we look at the introduction of the Internet, the introduction of mobile apps, and mobile access, those things led to the removal of communication barriers, the removal of the need for a presence in our offices and in our work, and fewer access barriers. And yeah, there are consequences to each of these, not just in jobs, but there are consequences in society, right? The always-on issues that come with that.
But it led to a lot of really positive things like flexible work and remote work. And it led to the ability for us to be able to benefit from connections and discussions with family and friends from far distances or being able to stay in touch with colleagues and work with people in specialized industries and things like that. That led to, again, I think, an uplift for society and people overall.
And with AI, yes, it's something that's going to lead to, as I mentioned, most likely some job loss. But it's also something that removes skill barriers. A lot of times we think of how much time we've spent learning, say, PowerPoint.
And learning PowerPoint is not something that's going to be as important later. Because of the way Copilot works with these tools today, it can use and understand their APIs, and it can use them better than you and I can. And that means that, over time, across those 200 or 2,000 different commands that exist in PowerPoint, it can design and build better PowerPoints than I can.
Now, I'm still going to be a collaborator in that. I'm still contributing to it. I'm still benefiting from that collaboration as well.
But the reality is that this AI tool is potentially going to do many of those tasks better than I can. And that means that a lot of these task-based skills become less and less important over time. And that leads to new opportunities.
In the same way that the spread of societal literacy created new opportunities, AI is leading to a reduction in skill barriers, to hyper-personalization, and to a premium on perspective. These are all things that open up new opportunities with AI. So AI is kind of bad for work, depending on where you are. But there are ways to make it a little less bad for you.
And there are certainly ways that it could be really good in the future for work. Now, the other thing that I want to talk about just before we kind of close this thought is, is AI bad for work? Because I've heard that AI does some things pretty poorly. And I think that this is a really interesting and important one because today, algorithms are ubiquitous, and they're used in our everyday life.
And I think it's fair to say many of them are racist or sexist, or they trivialize gender, or they stoke division and amplify bias, either intentionally through malicious actors or because of the data on which they are trained. And this is important to understand. But to be honest, all of us are trained on data, aren't we? Humans are trained on data sets too.
And I think we're starting to recognize the unacknowledged risks that come with our own data, the ideologies we learn within our families, our societies, and our organizations. You'd be surprised how many organizations, when we start to use these AI tools (because we're careful about this now and look for responsible pathways), find that there are biases in their data. Surfacing those biases hopefully leads to a reduction in bias risk in things like recruitment and other places where we might use these tools. And I think that's an advancement, because AI is not self-aware, but AI is making us more self-aware.
And as AI makes us more self-aware, it allows us to examine false binaries: male or female, masculine or feminine, straight or gay, black or white, left or right, us or them. We think in these false binaries, but AI doesn't have an innate sense of one or the other, right? The data tells it what to understand and what observations to make. And if we can think about how to get rid of these false binaries in our own work and our own lives, that can typically lead to an improved result for everyone.
Now, back to whether AI is good or bad. With AI making us more self-aware, I would say that in the long term, AI is probably very good for work, because it should lead to more inclusive work, better representation, and far fewer risks of bias. But in the short term, AI is kind of bad for work if we're not responsible in how we apply it. The last thing I want to talk about is hallucinations.
And hallucinations are a problem with AI today. If you've never experienced this, it's really easy to test yourself. Go and type a prompt into ChatGPT or something like that.
Type in: create a poem for me, or create a song, where every word starts with the letter E. And as you do that, it'll create something. And you'll find, even with GPT-4, that it doesn't always do this, right? Sometimes there are other words in there, and what you can do then is say to it, hey, did you meet the assignment? Because I asked you to create a song where every word on every line starts with the letter E. And it'll tell you, oh, I made a mistake.
And this is an interesting thing, because new research is showing us that simple additions like reflection, right, and I'm not going to get into the impact of storage and other things, just adding reflection to models like GPT, adds a lot of value because it really reduces the number of hallucinations. And so, do I think that AI is bad for work? Sure, because AI can make mistakes.
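As a rough sketch of what a reflection loop looks like in practice (my own illustration; the checker, `reflect`, and the stand-in generator are all hypothetical, not a real model API), it is simply generate, self-check, retry:

```python
def every_word_starts_with_e(text: str) -> bool:
    """The self-check: does every word in the draft start with 'e'/'E'?"""
    words = text.split()
    return bool(words) and all(
        w.lstrip('"\'(').lower().startswith("e") for w in words
    )

def reflect(generate, max_attempts=3):
    """Minimal reflection loop: generate a draft, check it, retry on failure.

    `generate(attempt)` stands in for a call to a model; here it is a plain
    function so the sketch runs without any API."""
    draft = ""
    for attempt in range(max_attempts):
        draft = generate(attempt)
        if every_word_starts_with_e(draft):
            return draft
    return draft  # best effort after max_attempts

# Stand-in "model": the first draft breaks the constraint ("the"),
# and the reflected retry fixes it.
drafts = ["Every evening the eagles eat", "Every evening eager eagles eat"]
result = reflect(lambda attempt: drafts[min(attempt, len(drafts) - 1)])
print(result)  # -> Every evening eager eagles eat
```

The point is that the double-check is cheap to automate: the same system that produced the draft can be asked whether the draft met the assignment, and try again if it didn't.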
And if we don't know that it can hallucinate and we don't know how to, in the short term, help it correct itself, help it double-check its work, then there's a good chance that we're going to make mistakes. And as I mentioned, while they're a lower risk, lower impact today, they're still mistakes. We want to avoid those.
On the other side, AI is arguably quite good because, in case you hadn't noticed, we humans make mistakes all the time. This hallucination thing sounds AI-specific, but it's also something we do a lot. We use the information we know.
And we make an assumption. We say we think this is what is true. We often misremember things as well.
And so while it's not identical, it is true that if AI makes fewer and fewer mistakes, and gets so effective that hallucinations become a rare occurrence we can address very easily, then AI will make far fewer of these kinds of mistakes than we make ourselves. And again, that just leads to AI being better for work instead of worse. I leave it up to you.
Because the truth is, whether AI is good for work or bad for work is up to us. If we embrace it, adopt it, learn about it, and help each other navigate this journey, AI is going to be very good for work. If we don't do that, if we leave it in the hands of technologists or we defer it and say we're going to deal with it later, then that's very bad for work.
So I hope this has helped you navigate this really complex discussion. Is AI good or bad for work?