And since the agents speak English, people will try endless tricks to fool the AI. “We do a lot of testing before we implement anything, and then we monitor it,” he adds. “Anything that’s not correct or shouldn’t be there we need to look into.”
At IT consultant CDW, one area where AI agents are already in use is helping staff respond to requests for proposals. The agent is tightly locked down, says Nathan Cartwright, its chief architect for AI. “If someone else sends it a message, it bounces back,” he says.
There’s also a system prompt that specifies the agent’s purpose, he says, so anything outside that purpose gets rejected. Plus, guardrails keep the agent from, say, giving out personal information, and limit the number of requests it can process. Then, to ensure the guardrails are working, every interaction is monitored.
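The lockdown pattern described here — reject unknown senders, screen for personal data, and cap request volume — can be sketched in a few lines. This is an illustrative mock-up, not CDW’s implementation; all names, rules, and thresholds are hypothetical.

```python
import re
import time
from collections import deque

# Hypothetical allow-list: the agent only answers its own team.
ALLOWED_SENDERS = {"rfp-team@example.com"}
# Crude stand-in for a personal-information content filter.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class RateLimiter:
    """Allow at most max_calls requests per window_seconds."""
    def __init__(self, max_calls=10, window_seconds=60.0):
        self.max_calls, self.window = max_calls, window_seconds
        self.calls = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen outside the window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

limiter = RateLimiter(max_calls=2, window_seconds=60.0)

def handle(sender, message):
    if sender not in ALLOWED_SENDERS:
        return "bounced: unauthorized sender"      # "it bounces back"
    if not limiter.allow():
        return "rejected: rate limit hit"
    if SSN_PATTERN.search(message):
        return "rejected: personal data detected"
    # In a real agent, a system prompt restricting the agent to its
    # purpose would be sent to the model here, so off-purpose requests
    # get refused by the model itself.
    return f"processed: {message}"
```

The checks run cheapest-first: sender identity before rate accounting, rate accounting before content scanning, so unauthorized traffic never consumes quota.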
“It’s important to have an observability layer to see what’s going on,” he says. “Ours is totally automated. If a rate limit or a content filter gets hit, an email goes out to say check out this agent.”
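The observability layer Cartwright describes amounts to two behaviors: log every interaction, and escalate automatically when a guardrail fires. A minimal sketch of that pattern, with hypothetical event names and a placeholder in place of the actual email alert:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-observability")

# Guardrail hits that warrant escalation, per the pattern above.
ALERT_EVENTS = {"rate_limit_hit", "content_filter_hit"}

def notify_team(event, detail):
    # Placeholder for the automated email ("check out this agent");
    # a real system might call smtplib or a paging service here.
    log.warning("ALERT: check out this agent - %s (%s)", event, detail)

def record(event, detail=""):
    """Log every interaction; escalate guardrail hits automatically.

    Returns True when an alert was sent, so callers can verify
    the escalation path.
    """
    log.info("event=%s detail=%s", event, detail)
    alerted = event in ALERT_EVENTS
    if alerted:
        notify_team(event, detail)
    return alerted
```

Routing every event through one `record` call is what makes the layer “totally automated”: the alerting rule lives in one place instead of being scattered across the agent’s handlers.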
Starting with small, discrete use cases helps reduce the risks, says Roger Haney, CDW’s chief architect. “When you focus on what you’re trying to do, your domain is fairly limited,” he says. “That’s where we’re seeing success. We can make it performant; we can make it smaller. But number one is getting the appropriate guardrails. That’s the biggest value, rather than hooking agents together. It’s all about the business rules, logic, and compliance that are put in up front.”