
How Do We Tackle AI Compliance, Privacy, and Responsible Enablement?

January 18, 2024
7 min read

Read Summary

  • There are challenges around responsible enablement, compliance, and privacy when implementing AI tools like call recording and data access. Organizations need to carefully consider these issues.
  • Solutions like Bing Chat take a privacy-focused approach where conversations are immediately deleted after the chat session. This avoids storing data that could be misused.
  • AI capabilities like Copilot can expose existing data security issues in an organization by making more data easily accessible. The solution is to fix those underlying issues, not avoid using AI.
  • When rolling out AI, organizations need to train employees on proper data handling and implement governance like sensitivity labels to manage access.
  • Rather than eliminating jobs, AI is likely to create new types of work around training, compliance, governance, and responsible data use. Organizations need to scale up expertise in these areas.

Read Transcript

How do we tackle responsible enablement, compliance, and the right privacy approach? A lot of this stuff is amazing, and there are probably some people listening to this thinking, yeah, let's go.

But there are some challenges here, so maybe we could talk a bit about those and what you're seeing. Maybe you could start us off in terms of the challenges and how you're addressing them.

This one, I think, is challenging because there are laws, but it's still open to interpretation. And I think every company is going to feel a little bit different about what they've already got in place, what is acceptable for them already from a privacy policy, and how this is going to evolve in an AI world.

I even think about this sometimes from just a practical perspective. Let's forget compliance and that side of things. Practically speaking, the only way to use these AI tools properly is to have the data, so you have to get the data.

Let's say I'm thinking right now through the lens of phone calls and conversations we're having, even a customer meeting. Recording every call is easy: you click the button, and it's done. But you generally have to ask the other person if it's okay to record the call.

Are you comfortable with me recording the call? Even that, at the outset, isn't such a mainstream motion today that everybody expects every call they're on to be recorded. I think there's just some nuance there around how that may affect a relationship.

There's some awkwardness when that kind of thing is going on. As for the compliance and privacy piece, I'm not a lawyer; we'll deal with it however it has to be dealt with. Organizations, their HR teams, their legal firms, and how they set this up will work through that.

We haven't gotten too deep into those yet. But I'm thinking from a practical perspective about where we want to grab all this information: the basic opt-in and starting point.

I think there are a few things we've seen, right to your point. Bing Chat Enterprise is a good example of a solution whose whole model is really simple. You open the session, whether it's in the Bing sidebar, the Edge sidebar, or on its own, and the second you're done, it's gone forever. It's deleted, it's not stored, there's no training on the data, there's no storage of the data.

It's a really safe model because essentially you choose when to use this AI tool, to generate some images or do whatever you want to do, and the second you're done, all that data is wiped forever. It's not available. There's no history, there's no audit.

That can work well for a simple getting-started scenario. But then it becomes something like Microsoft Copilot, which we were talking about, where I'm asking, hey, what's been happening across those stores in this region? And I'm gathering all that data.

In those examples, the audit is really critical because we want to understand how people may have made a decision: what they used, what queries and prompts they might have used. Auditing, and what is recorded, is actually really important to understand, as well as when to do it and when not to.

Similarly, I think one of the key foundational principles with AI should be that it doesn't give you a capability, and especially not a right, that you don't already have. If you already have access to a bunch of assets, of course, surface those assets to the user in the flow of their work.

When I ask what was the latest about Judy, in my previous example, I'm asking it to access only the data that I have access to. It won't let me see information otherwise.
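The principle described here, where an assistant only surfaces what the asking user could already open, is often called security trimming. A minimal sketch of the idea, using made-up documents and access lists purely for illustration (nothing here reflects Copilot's actual implementation):

```python
# Security-trimmed retrieval sketch: results are filtered by the
# querying user's existing permissions before any AI sees them.
# Document and ACL shapes are hypothetical, for illustration only.

documents = [
    {"id": 1, "title": "Q3 regional sales", "allowed": {"alice", "bob"}},
    {"id": 2, "title": "Judy's latest status update", "allowed": {"alice"}},
    {"id": 3, "title": "HR compensation review", "allowed": {"hr-team"}},
]

def search(query: str, user: str) -> list[dict]:
    """Return only matching documents the user could already open."""
    matches = [d for d in documents if query.lower() in d["title"].lower()]
    return [d for d in matches if user in d["allowed"]]

# Alice can see Judy's update; Bob cannot, even with the same query.
print([d["id"] for d in search("judy", "alice")])  # [2]
print([d["id"] for d in search("judy", "bob")])    # []
```

The key design point is that the permission check happens at retrieval time, so the AI never holds data the user couldn't reach on their own.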

Similarly, when we look at that same data scenario, what ends up happening if the organization has a lot of holes in its security policy? And that happens a lot.

Here's an example, one that always causes a disruption with every customer when we do these Copilot previews or pilots. You see these "share with everyone" links: share with everyone in my organization, so share with everyone in 2toLead or share with everyone in Xero.

People use these links all the time. But the second you use that link, technically, if you're using Microsoft Search, anyone who searches will find that content now, right? Everyone can find that content. And the reality is these links, as one example of many, are used a lot for really secure content. It's not intentional.

Someone just doesn't know what that link means; they're just trying to share something quickly. But now they've created a breach, an exposure point. It may not have been as visible before, because not everyone uses search, but Copilot will find that data very rapidly.
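One way to find this kind of exposure before an AI rollout is to audit sharing records for organization-wide scope. A rough sketch of that check, using fabricated permission records shaped loosely like what a files API might return (the field names here are assumptions, not a real API contract):

```python
# Flag items shared via "everyone in the organization" links.
# The permission records below are fabricated examples.

permissions = [
    {"item": "Payroll2024.xlsx", "link_scope": "organization"},
    {"item": "TeamLunchMenu.docx", "link_scope": "users"},
    {"item": "MergerDraft.pptx", "link_scope": "organization"},
]

def org_wide_items(perms: list[dict]) -> list[str]:
    """Items that anyone in the tenant could surface via search or an AI assistant."""
    return [p["item"] for p in perms if p["link_scope"] == "organization"]

for item in org_wide_items(permissions):
    print(f"review sharing on: {item}")
```

In practice you would pull real permission data from your tenant's admin or reporting tools, but the triage logic is the same: anything readable by the whole organization is discoverable by the whole organization.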

Because it finds that data, responsibly enabling Copilot in an organization that hasn't planned sensitivity labels, that doesn't have a strategy, whether it's Purview or something else, SharePoint Advanced Management, whatever, means you're going to run into these problems, because now that risk exposure is more visible.

It's not that it wasn't there before. It's the same risk, it's the same problem we had before. It's just now it's made more visible because of AI tools. And the trick there is not to stop Copilot from being rolled out.

The trick is to solve that underlying risk that hasn't been addressed for years, because ignoring it and saying, oh, we just won't roll out Copilot, or deferring it, doesn't solve the very real risk that already exists.

Like you're saying, the capabilities of Copilot just expose the fact that the risk was already there, and now that the hole is easy to access, well, we've got to fix it properly.

I think there's an opportunity for this technology, as it's coming through, to fix some of those gaps or holes that exist in organizations, to make everybody better and maybe more prepared for the future.

I'll kind of finish with this piece, because I'm not an expert in this space, but there are three components here.

There's the legal compliance component. There's the training component: we've got to train everybody on how to use these tools. With the example of the link, if you don't know how this stuff works, I can inadvertently give you access to my OneDrive and not realize you can reach all these documents through that link.

That's one piece, and then there's the administrative or governance piece: making sure these things are being enforced, tracked, and tweaked. People talk about AI eliminating all these jobs and all these people.

I think it's just going to create more interesting work for folks to do, but people are going to have to be trained up and skilled up to do it.
