Giving AI ‘hands’ in your SaaS stack
I love AI copilots. I also don’t trust them with write access.
That’s not a philosophical stance. It’s an operator scar. In high-growth startups, helpful automation has a habit of turning into a 2 a.m. incident when it gets too much privilege and not enough supervision. Now we’re wiring large language models to the same systems that run lead-to-cash, billing, provisioning and customer support.
I’ve spent the last 15 years building the commerce engine for some of Silicon Valley’s fastest-growing companies. From the early, scrappy days at Eventbrite to architecting the lead-to-cash systems that supported Slack’s IPO and now leading enterprise systems at Gusto, I’ve lived through every phase of the business technology maturity curve.
For most of that time, my job was to build deterministic rails for deterministic trains. If a sales rep updated a contract in Salesforce, we built rigid, unforgiving integrations to ensure that data flowed perfectly into NetSuite and Snowflake. It was binary: it worked or it failed.
But we are now at an inflection point that scares me as much as it excites me. We are moving from the era of “chat with your data,” where AI is a passive oracle, to the era of “work with your data.” We are giving AI agents hands.
We are asking these probabilistic models not just to summarize a deal, but to update the record, provision the license and email the customer. We are connecting stochastic reasoning engines to mission-critical systems of record. As someone who has spent sleepless nights worrying about data integrity during financial audits, I can tell you that the potential for Excessive Agency, a vulnerability class in the OWASP Top 10 for LLM Applications in which an agent does more than you intended, is the single biggest risk keeping IT leaders up at night.
I’m seeing a lot of leaders paralyzed by this risk, while others are recklessly handing out API keys. There is a middle ground. Drawing from my experience stabilizing complex stacks at Gusto and OneTrust, I want to lay out a pragmatic architecture for safe agency — how to let AI take the wheel without driving your GTM stack off a cliff.
In the early days of a startup, as I vividly recall from my time at Eventbrite, velocity is everything. You embed technical teams directly into sales ops or CX just to keep the lights on. In that environment, if you were deploying an AI agent today, the temptation would be to create a single Salesforce integration user with system administrator privileges and let the agent run wild.
I call this the God-mode anti-pattern and it is catastrophic.
If an attacker manages to use an indirect prompt injection — hiding malicious instructions in a calendar invite or a web page the agent reads — that agent essentially becomes a confused deputy. It has the keys to the kingdom. It can delete opportunities, export customer lists or modify pricing configurations.
When I joined Gusto to lead business technology, one of my priorities was data trust. You cannot have trust if your non-human actors have unfettered access. We moved away from the wild west of shared credentials toward a model of rigorous identity governance.
For AI agents, this means we must treat them as non-human identities (NHIs) with the same or greater scrutiny than we apply to employees.
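To make that concrete, here is a minimal sketch, with hypothetical names, of what treating an agent as a scoped non-human identity can look like: every agent gets an accountable owner and an explicit, deny-by-default scope list instead of a shared admin credential.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: an AI agent registered as a non-human identity
# with an explicit, auditable scope list rather than god-mode access.
@dataclass(frozen=True)
class NonHumanIdentity:
    name: str                      # e.g. "renewals-agent"
    owner: str                     # the human team accountable for this agent
    allowed_scopes: frozenset = field(default_factory=frozenset)

    def can(self, scope: str) -> bool:
        # Deny by default: anything not explicitly granted is refused.
        return scope in self.allowed_scopes

renewals_agent = NonHumanIdentity(
    name="renewals-agent",
    owner="gtm-systems",
    allowed_scopes=frozenset({"opportunity:read", "opportunity:update_stage"}),
)

assert renewals_agent.can("opportunity:read")
assert not renewals_agent.can("pricing:modify")   # never granted, so never allowed
```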
At Gusto, we didn’t just wire systems together; we used middleware like Workato and MuleSoft to create a commerce engine that sanitized data flow. For AI agents, you need a similar architectural buffer. You should never connect an LLM directly to your raw system APIs.
Instead, you need a Tool Gateway.
Think of this as an air traffic controller. The agent doesn’t see your complex Salesforce schema or your NetSuite SOAP API. It sees a simplified, virtualized set of tools that you define.
The industry is coalescing around the Model Context Protocol (MCP) as a standard for this layer. It is often described as a universal USB-C port for connecting AI models to your data sources. By using an MCP server as your gateway, you ensure the agent never sees the credentials or the full API surface area, only the tools you explicitly allow.
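A minimal, framework-agnostic sketch of the gateway idea follows; the tool names are hypothetical, and a production version would live behind an MCP server rather than a hand-rolled class. The point is that the model can only discover and call the tools you register, never the raw APIs or the credentials behind them.

```python
from typing import Any, Callable

# Hypothetical sketch of a tool gateway: the agent only ever sees the names
# registered here, never the underlying CRM or ERP APIs or their credentials.
class ToolGateway:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def list_tools(self) -> list[str]:
        # This list is the entire surface area the model can reason about.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        if name not in self._tools:
            raise PermissionError(f"Tool '{name}' is not exposed to this agent")
        return self._tools[name](**kwargs)

def update_opportunity_stage(opportunity_id: str, stage: str) -> dict:
    # In a real system this would call the CRM behind the scenes,
    # using credentials the agent never touches.
    return {"opportunity_id": opportunity_id, "stage": stage, "status": "queued"}

gateway = ToolGateway()
gateway.register("update_opportunity_stage", update_opportunity_stage)

print(gateway.list_tools())
print(gateway.call("update_opportunity_stage",
                   opportunity_id="006-example", stage="Closed Won"))
```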
When I was architecting systems at Slack, we constantly dealt with the concept of user context. If a sales rep in London searches for a contract, they should only see UK contracts.
AI agents often break this because they use a service account that sees everything.
To fix this, we need to lean on the OAuth 2.0 on-behalf-of (OBO) flow. When a user asks an AI agent to “update this deal,” the agent shouldn’t act as a standalone service identity; it should exchange the user’s token and act as that specific user.
This means the underlying platform (Salesforce, Workday, etc.) enforces its existing permission rules. If the user doesn’t have permission to view executive compensation, the agent acting on their behalf won’t either. This simple architectural decision saves you from having to rebuild your entire authorization model inside the AI layer.
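As a sketch of what that looks like in code: many identity providers implement on-behalf-of semantics via OAuth 2.0 Token Exchange (RFC 8693). The endpoint, client credentials and exact parameters below are placeholders and vary by provider.

```python
import requests

# Illustrative only: parameter names follow OAuth 2.0 Token Exchange (RFC 8693),
# which many identity providers use for on-behalf-of flows. The endpoint and
# client credentials here are hypothetical placeholders.
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"

def exchange_for_user_token(user_access_token: str, target_audience: str) -> str:
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_access_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": target_audience,
        },
        auth=("agent-client-id", "agent-client-secret"),  # hypothetical agent credentials
        timeout=10,
    )
    resp.raise_for_status()
    # The returned token carries the requesting user's permissions, not the
    # agent's. Downstream systems enforce their existing authorization rules.
    return resp.json()["access_token"]
```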
One of the cultural shifts I drove at Gusto was improving devex (developer experience) and stabilizing our release pipelines with CI/CD tools like Gearset. We treated infrastructure changes with extreme caution.
We need to treat AI actions with the same reverence. My rule for autonomous agents is simple: If it can’t dry run, it doesn’t ship.
Every state-changing tool (POST, PUT, DELETE) exposed to an agent must support a dry_run=true mode. When the agent wants to update a record, it first calls the tool in dry-run mode. The system returns a diff — a preview of exactly what will change (e.g., “Status will change from Active to Churned”).
This allows us to implement a human-in-the-loop approval gate for high-risk actions. The agent proposes the change, the human confirms it and only then is the live transaction executed. This prevents the nightmare scenario we saw with the recent failure where an AI recursively deleted a database because it lacked context awareness.
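Here is a minimal sketch of that flow, with hypothetical tool and record names: the same tool handles both the preview and the live call, and nothing is written until a human signs off on the diff.

```python
# Hypothetical sketch of the "if it can't dry run, it doesn't ship" rule:
# every state-changing tool returns a diff first, and a human approves
# the change before the live transaction executes.

CURRENT_RECORD = {"account_id": "ACME-42", "status": "Active"}

def update_account_status(account_id: str, new_status: str, dry_run: bool = True) -> dict:
    diff = {"account_id": account_id, "status": (CURRENT_RECORD["status"], new_status)}
    if dry_run:
        return {"dry_run": True, "diff": diff}       # preview only, nothing written
    CURRENT_RECORD["status"] = new_status            # the only place state changes
    return {"dry_run": False, "applied": diff}

def human_approves(diff: dict) -> bool:
    # In practice this would be a Slack approval, a ticket or an in-app prompt.
    answer = input(f"Apply change {diff}? [y/N] ")
    return answer.strip().lower() == "y"

preview = update_account_status("ACME-42", "Churned", dry_run=True)
if human_approves(preview["diff"]):
    print(update_account_status("ACME-42", "Churned", dry_run=False))
else:
    print("Change rejected; nothing was written.")
```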
Finally, we have to accept that agents are probabilistic. They will hallucinate. They will retry due to network blips. They will get confused.
In the distributed systems we built at Slack and Ethos, we relied on guardrail patterns built for exactly these failure modes, and they are just as non-negotiable for agentic AI.
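One such pattern is the idempotency key: if an agent, or a flaky network, replays a request, the system returns the original result rather than performing the side effect twice. A minimal sketch, with hypothetical names:

```python
import uuid

# Illustrative sketch: idempotency keys make agent retries safe. Replaying the
# same request returns the original result instead of duplicating the side effect.
_processed: dict[str, dict] = {}

def create_invoice(customer_id: str, amount_cents: int, idempotency_key: str) -> dict:
    if idempotency_key in _processed:
        return _processed[idempotency_key]           # replay: no second invoice
    invoice = {
        "invoice_id": str(uuid.uuid4()),
        "customer_id": customer_id,
        "amount_cents": amount_cents,
    }
    _processed[idempotency_key] = invoice
    return invoice

key = str(uuid.uuid4())                               # one key per logical action
first = create_invoice("ACME-42", 129_00, idempotency_key=key)
retry = create_invoice("ACME-42", 129_00, idempotency_key=key)
assert first == retry                                 # the retry is harmless
```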
Leadership isn’t about being perfect; it’s about being present and navigating challenges with resilience. The same applies to our technology strategy. We cannot wait for AI to be perfect before we use it.
At Gusto, by stabilizing our platform and putting these guardrails in place, we were able to ship AI-assisted automations that classify and route incoming tickets, removing manual drudgery for our CX teams and shortening handling times. We didn’t do it by being reckless; we did it by building a trust architecture first.
As CIOs and IT leaders, our job isn’t to say “no” to AI. It’s to build the invisible rails that allow the business to say “yes” safely. By focusing on gateways, identity and transactional safety, we can give AI the hands it needs to do real work, without losing our grip on the wheel.
This article is published as part of the Foundry Expert Contributor Network.
Pranav Lal is an enterprise systems and GTM technology leader with 15+ years building the “commerce engine” behind high-growth SaaS companies. He has led architecture and operations for Salesforce-centric lead-to-cash, CPQ, billing and invoicing integrations, plus secure data pipelines that tie sales, marketing, customer success and finance together.
Across Eventbrite, Slack, Ethos and OneTrust, he helped scale systems through rapid growth and public-company readiness, with a bias for reliability, auditability and pragmatic governance. He currently leads GTM Systems at Gusto, partnering with RevOps, CX, finance, security and data to align the systems roadmap with commercial outcomes.