AI is becoming embedded in workflows, customer interactions, and enterprise decision-making across organizations. For boards and CEOs, that shift changes the conversation. The central question is no longer “How fast can we adopt AI?” but rather: “Can we govern it well enough to trust it at scale?”
Lexy Kassan, a senior technology leader responsible for enterprise AI strategy and governance at Databricks, brings deep experience operating at the intersection of data, AI, and business transformation. Her perspective is grounded not in theory, but in the realities of deploying generative and agentic systems inside large organizations—where tone, bias, monitoring, and accountability are not abstract risks, but operational requirements.
What follows is a conversation about why governance is a prerequisite for scaling high-quality enterprise AI.
AI Governance Leads to Trustworthy and Relevant Outputs
Catherine Brown: When executives say they are “doing AI governance,” what do they misunderstand about what it actually takes to scale AI into production?
Lexy Kassan: Typically, when I hear how organizations approach AI governance, it becomes an exercise of, “We have a policy, we have a bunch of documented processes, and we have people who will approve things. As long as someone has checked the boxes and gone through the steps, then all is well.”
Realistically, governance shapes AI initiatives both in the development phase and in their ongoing success at scale. Strong governance leads to production AI that’s trusted and continues to improve and support the organization as designed. Scale does not come from getting approvals. Scale comes from operating AI on an ongoing basis. And that takes much more than just the data and AI team.
AI governance for trust at scale requires three things: communication, collaboration, and iteration. Communicate expectations from both perspectives: policy and risk mitigation on one side, business intention and use on the other. Collaborate across subject matter experts, technical experts, risk and security experts, and others to address concerns and build trusted systems. And iterate over time to keep AI systems relevant, trusted, and valuable.
Governance as the Enabler of AI Value
Catherine: At what point does AI governance stop being a compliance concern and become an operational requirement for the business?
Lexy: Governance has gone through a transformation in the last few years, particularly because of AI. Five or ten years ago, governance was often framed as risk mitigation and compliance. It was almost seen as the antithesis of innovation. Now governance is better understood in its truer form: as the enabler of value realization. Without governance, it’s very difficult to trust data or AI. And without trust, no one uses it. And use is where value comes from.
If no one trusts your AI, you’ve invested resources and gotten no value.
So governance is already a requirement if you want widespread adoption and operation at scale.
Process Overload Slows Innovation
Catherine: What happens when organizations simply add AI into their existing review processes instead of redesigning the operating model?
Lexy: This is where undue amounts of process tend to get added to the mix.
Organizations say, “Instead of identifying a smoother path for AI, we’re just going to take whatever existing processes we have — privacy assessments, architecture reviews, security reviews — and add more to them.” You end up with disconnected committees that might meet once a month. You’re layering AI on top of slow governance rather than redesigning governance for AI.
If it takes six months to get something approved, and AI capabilities are evolving monthly, you’re structurally setting yourself up to fall behind. Governance shouldn’t mean more overhead. It should mean identifying a paved path — an architecture and framework that already mitigates risk so you’re not starting from scratch every time.
From Insight to Action Changes the Risk Profile
Catherine: How does the governance conversation change when AI systems move from generating insights to taking actions through agents and applications?
Lexy: When we think about putting AI into a process, we often think about a continuum from control to trust. On one end, you have fully human-controlled processes. On the other end, you have fully automated, agentic systems. When AI moves from generating insight to taking action, the stakes change. You give up more control and therefore must be able to place more trust in the system.
To achieve the levels of trust necessary for agentic action, the majority of the responsibility for AI governance has to shift toward business subject matter experts. Having a staged approach for testing, feedback, guardrail development, and evaluation helps build confidence that the agents will act appropriately the vast majority of the time. And this responsibility continues in production, where additional feedback and prompt engineering keep systems on track.
That covers the content and action side. But what about the technical part? That’s where system fallback mechanisms, resilience, and robustness become critical. What happens if the AI is down? What happens if you need to retrain a model or refactor a chain? Governance includes planning for those scenarios. Where does it fall back to? Who does it fall back to? What does that look like?
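To make that fallback planning concrete, here is a minimal sketch, not something specified in the interview, of how a fallback path might be wired around an agent call. The function names, the confidence threshold, and the review-queue stub are hypothetical placeholders for whatever your platform actually provides.

```python
from dataclasses import dataclass
import random


@dataclass
class AgentResult:
    answer: str
    confidence: float
    handled_by: str  # "agent" or "human_review"


def call_agent(prompt: str) -> tuple[str, float]:
    """Placeholder for a real model or agent endpoint call."""
    return f"Draft response to: {prompt}", random.uniform(0.0, 1.0)


def enqueue_for_human(prompt: str, draft: str | None = None) -> str:
    """Placeholder: in practice this would write to a review queue owned by a named team."""
    return f"Escalated to human review (draft attached: {draft is not None})"


def run_with_fallback(prompt: str, threshold: float = 0.7) -> AgentResult:
    """Call the agent; fall back to human review on failure or low confidence."""
    try:
        answer, confidence = call_agent(prompt)
    except Exception:
        # The endpoint is down or errored: route the request to a human owner.
        return AgentResult(enqueue_for_human(prompt), 0.0, "human_review")
    if confidence < threshold:
        # Low-confidence output: escalate rather than let the agent act autonomously.
        return AgentResult(enqueue_for_human(prompt, draft=answer), confidence, "human_review")
    return AgentResult(answer, confidence, "agent")


print(run_with_fallback("Summarize this customer complaint"))
```

The design point is simply that the fallback path, and the person or team it falls back to, is decided and built before production, not improvised after an outage.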
Accountability Before Production
Catherine: What decisions do leadership teams need to make upfront about accountability, escalation paths, and human oversight before AI reaches production?
Lexy: Increasingly, we see organizations thinking about agents almost like employees. There are companies putting agents into workforce management tools, assigning them to managers, and holding managers accountable for their performance. You can apply performance management thinking to agents just as you would to a human employee. How well is it performing? Is it staying within bounds? Is it producing the outcomes it was designed for? It’s easier in some ways to correct agents — you can change instructions or retrain models — but it’s also different. Agents don’t have the same motivations as humans.
Leadership teams need to decide how performance will be measured, how trust will be evaluated, and what it takes to pull something out of production — and what it takes to reinstate it. Trust is easy to lose and much harder to rebuild. That applies to AI just as it does to people.
Scaling Responsibly Without Slowing Down
Catherine: Across the organizations you work with, what patterns distinguish teams that scale AI responsibly while still moving quickly?
Lexy: The first is the paved path I talked about earlier. They get to a point where they don’t have to debate the technology every time. They have a governed architecture with traceability, auditability, and accountability built in. That allows them to move quickly because the guardrails are already there.
The second is bringing business subject matter experts directly into the process. Scaling happens fastest when you don’t have constant back-and-forth between business and technology teams translating requirements. The business brings context: what good looks like, what’s valid, what’s not valid.
Governance is no longer just about the technologists. It’s about business and technology coming together under a shared framework.
Trust Must Be Designed and Measured
Catherine: How should executives think about trust — as something to be designed, measured, and managed — both internally and with customers?
Lexy: Trust is difficult to measure directly. So we rely on proxies. We measure data quality, system performance, adoption, and usage. We evaluate whether the system remains within defined bounds and produces acceptable outcomes.
You can think about it like performance management for a person. How much are others relying on them? How productive are they? How consistently do they meet expectations?
Trust itself may be hard to quantify, but performance, consistency, and adherence to standards are measurable. Over time, those measurements help establish trust.
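As one illustration of proxy-based measurement, the sketch below aggregates a few of the measurable signals Lexy mentions (adoption, staying within bounds, accepted outcomes) into simple rates. The field names and the logging structure are hypothetical and would map to whatever your monitoring actually records.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """One logged AI interaction; the fields are hypothetical examples of trust proxies."""
    used_output: bool        # did anyone actually rely on the result? (adoption)
    within_guardrails: bool  # did the output stay inside defined bounds?
    outcome_accepted: bool   # did a reviewer or downstream check accept the outcome?


def trust_proxies(log: list[Interaction]) -> dict[str, float]:
    """Summarize measurable proxies; trust itself is inferred from these over time."""
    n = len(log) or 1
    return {
        "adoption_rate": sum(i.used_output for i in log) / n,
        "within_bounds_rate": sum(i.within_guardrails for i in log) / n,
        "acceptance_rate": sum(i.outcome_accepted for i in log) / n,
    }


print(trust_proxies([
    Interaction(True, True, True),
    Interaction(True, False, False),
    Interaction(False, True, True),
]))
```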
Governance Sticks When Feedback Loops Exist
Catherine: If a CEO asked you for one concrete change to make in the next 90 days to ensure AI governance actually sticks, what would you recommend?
Lexy: Make sure there is feedback — whether that’s in usage or in understanding why something isn’t being used. If people are interacting with AI, are they providing feedback on the quality of results? Are they evaluating outcomes? And if no one is interacting with it directly, then we still need to evaluate those outcomes. Who is part of that review cycle?
Governance sticks when feedback creates meaningful change. When people see that their input improves the system and improves their own way of working, they engage with it.
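As a rough sketch of what such a feedback loop can look like in practice (again hypothetical, not something described in the interview), the example below records ratings alongside AI outputs and surfaces low-rated interactions for the review cycle that drives prompt fixes, guardrail changes, or retraining.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackEntry:
    prompt: str
    response: str
    rating: int  # e.g. 1 (unusable) to 5 (excellent); the scale is a placeholder
    comment: str = ""


@dataclass
class FeedbackLog:
    entries: list[FeedbackEntry] = field(default_factory=list)

    def record(self, entry: FeedbackEntry) -> None:
        self.entries.append(entry)

    def needs_review(self, threshold: int = 3) -> list[FeedbackEntry]:
        """Surface low-rated interactions for the review cycle."""
        return [e for e in self.entries if e.rating < threshold]


log = FeedbackLog()
log.record(FeedbackEntry("Draft renewal email", "Hi there...", rating=2, comment="Tone too casual"))
log.record(FeedbackEntry("Summarize support ticket", "Customer reports...", rating=5))
for item in log.needs_review():
    print("Review:", item.prompt, "-", item.comment)
```

The specific mechanism matters less than the loop itself: feedback is captured where the work happens, someone owns reviewing it, and the system visibly changes as a result.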
And ultimately, make sure you’re prioritizing for value. Build what is worth building. Then establish that paved path so it’s easier to say yes to the next valuable AI initiative.
Governance Is the Condition for Scale
AI governance is often framed as a control mechanism. In practice, it is an operational discipline. Scaling AI is not about adding more review boards or more documentation. It is about embedding guardrails into architecture, establishing feedback loops, and designing systems that can be trusted over time.
For leadership teams, the takeaway is straightforward: governance is not what slows AI down, but poorly designed governance does. When governance is built into the platform, aligned with business ownership, and reinforced through measurement and feedback, it becomes the condition that allows AI to scale responsibly — and sustainably.
Explore the Databricks report, Delivering a Secure Data and AI Strategy, to see how leading enterprises are embedding governance, security, and trust directly into their AI operating models.