AI Tools and the Cloud
One of the biggest trends I’ve noticed lately—both while working with clients and walking conference floors—is how often your data ends up in an AI company’s cloud. Their justification is almost always the same: “It’s cheaper for you, and it helps the AI learn your system and improve.”
My response is simple: be very careful.
For example, if you’re using Copilot to help correct or write code, make sure you strip out actual table names, server names, credentials, or any identifiable architecture details. Never include result sets that contain real data. And be extremely cautious when using AI tools that may pool or index your information in ways that allow someone else—or some model—to ask a question and discover an answer you never intended to expose.
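As a concrete illustration, here is a minimal Python sketch of that scrubbing step. The server name, table name, and regex patterns in SCRUB_MAP are hypothetical stand-ins I made up for this example; a real sanitizer would be built from your own inventory of identifiers.

```python
import re

# Illustrative sketch: mask identifying names with neutral placeholders
# before pasting a snippet into an AI assistant. Every entry below is a
# hypothetical example, not a real server, table, or credential.
SCRUB_MAP = {
    r"\bprod-sql-01\b": "SERVER_A",          # hypothetical server name
    r"\bdbo\.EmployeeSalary\b": "TABLE_A",   # hypothetical table name
    r"(?i)password\s*=\s*[^;\s]+": "password=REDACTED",
}

def scrub(snippet: str) -> str:
    """Return a copy of the snippet with known identifiers masked."""
    for pattern, placeholder in SCRUB_MAP.items():
        snippet = re.sub(pattern, placeholder, snippet)
    return snippet

if __name__ == "__main__":
    raw = "SELECT * FROM dbo.EmployeeSalary; -- server=prod-sql-01; password=Hunter2"
    print(scrub(raw))  # identifiers come back as SERVER_A, TABLE_A, REDACTED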
Right now, my biggest concern is how Copilot, Teams, and other integrated AI assistants access files that may include sensitive employee information such as salary data, benefits usage, personnel reports, performance improvement plans (PIPs), or annual reviews.
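One low-tech way to get ahead of this is to inventory what an assistant could see before you turn it on. The sketch below assumes a hypothetical file-share path and keyword list; treat it as a starting point, not a substitute for a real data-classification pass.

```python
from pathlib import Path

# Hypothetical sketch: before enabling an AI assistant's indexing, walk a
# file share and flag documents whose names suggest sensitive HR content.
# The share path and keyword list are assumptions for illustration.
SENSITIVE_TERMS = ("salary", "benefits", "pip", "annual review", "personnel")

def flag_sensitive(share: Path):
    """Yield files whose names contain a sensitive term (case-insensitive)."""
    for path in share.rglob("*"):
        if path.is_file() and any(term in path.name.lower() for term in SENSITIVE_TERMS):
            yield path

if __name__ == "__main__":
    for hit in flag_sensitive(Path("//hr-share/documents")):  # hypothetical share
        print(f"Review before indexing: {hit}")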
Audit Access Before You Enable Features
When implementing these tools, review every data source and every endpoint that could become accessible. Nothing should be assumed safe by default.
You may even want to build a new step into your annual audit (see the sketch after this list):
- Review all AI-connected data points
- Reconfirm which endpoints AI tools can reach
- Ensure they haven’t quietly expanded their footprint into areas containing protected or regulated information, especially information stored in someone else’s environment, where your data may sit beside data from other companies.
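To make that audit step repeatable, a small script can diff the endpoints your AI tools currently reach against last year’s approved baseline. This is a minimal sketch: the file names and the JSON layout ({"endpoints": [...]}) are assumptions for illustration, not a standard format.

```python
import json
from pathlib import Path

# Sketch of the annual audit step: diff the AI tools' current endpoint
# reach against the previously approved baseline and flag quiet expansion.

def load_endpoints(path: str) -> set[str]:
    """Read a JSON file of the assumed form {"endpoints": [...]} into a set."""
    return set(json.loads(Path(path).read_text())["endpoints"])

def audit(baseline_file: str, current_file: str) -> None:
    baseline = load_endpoints(baseline_file)
    current = load_endpoints(current_file)

    for endpoint in sorted(current - baseline):
        print(f"EXPANDED: {endpoint} (not in approved baseline)")
    for endpoint in sorted(baseline - current):
        print(f"REMOVED:  {endpoint} (was approved, no longer reachable)")
    if current <= baseline:
        print("No footprint expansion detected.")

if __name__ == "__main__":
    audit("approved_endpoints.json", "current_endpoints.json")  # hypothetical files
```

Anything flagged as expanded is exactly the quiet footprint growth the checklist is meant to catch, and it becomes the input to the access questions below.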
You want your data served straight, not on the rocks, stirred together with someone else’s. You need to know whether your information is siloed, who can access it, why they have access, when they use it, and whether it can be securely deleted the moment you request it.
If you’re going to drink the Kool-Aid, drink it straight and keep your eyes on your glass.