Lori Schafer in Forbes Technology Council: Should You Trust Your AI Agents? That Depends On Your Data
- Tori Hamilton

- 7 hours ago
- 4 min read

Read the article in Forbes Technology Council here.
Across industries, companies are doing far more than experimenting with agentic AI. These systems are beginning to influence how decisions get made, how work gets done and how businesses interact with consumers. It’s an exciting moment, and the promise is real.
But to say the quiet part out loud: Do you actually trust your AI agents?
You should, provided your data foundation and governance model are built to support them. Trust in agentic AI is not about confidence in algorithms alone. It’s about confidence in master data, processes and the security and controls built into an operating model. This is the foundation for getting AI agents on a path toward autonomous, trustworthy execution.
Master Data Isn’t Exciting, But It’s Necessary
IDC Global reports that by 2027, half of enterprises will be using AI agents. But how many will be generating ROI and optimized business outcomes by then?
Agentic AI is only as effective as the data it can access and act upon, which is why master data, the core information about products, customers, suppliers, locations and more, is the key to AI-driven decision-making and agentic success.
Master data determines whether agents deliver value or introduce inconsistency. When data sits in silos, scattered across fragmented, outdated or inconsistent systems, AI agents are forced to operate on only partially correct information. That leads to slower learning cycles, heavier manual oversight and more time spent validating outputs.
Over time, this fragmentation erodes confidence in the agent itself. Teams stop trusting recommendations, override decisions more frequently and question whether autonomy is worth the effort. In the worst cases, poor data quality can create security and compliance issues that undermine enterprise-wide adoption.
By contrast, master data that is automatically cleaned, enriched and centralized can dramatically improve the output of agentic AI. A solid system doesn’t just store data; it evaluates where the data is coming from, validates it, cleans and enriches it, structures it consistently and tags it appropriately across the organization. Whether it's marketing, financial, supply chain or operational data, the information is primed for AI agents.
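As a rough sketch of the kind of pipeline described above (all names here are illustrative, not any specific product’s API), each record might be validated, cleaned, enriched with provenance and tagged before agents are allowed to consume it:

```python
from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    """One master data record, e.g., a product, customer or supplier."""
    source: str                              # originating system, e.g. "erp"
    fields: dict                             # raw attributes from that system
    tags: list = field(default_factory=list)

def validate(record):
    """Reject records missing the attributes agents depend on."""
    required = {"id", "name"}
    return required.issubset(record.fields)

def clean(record):
    """Normalize string values so agents see consistent data."""
    record.fields = {k: v.strip() if isinstance(v, str) else v
                     for k, v in record.fields.items()}
    return record

def enrich_and_tag(record):
    """Attach provenance and domain tags for downstream agent use."""
    record.tags.append(f"source:{record.source}")
    if "supplier_id" in record.fields:       # hypothetical domain rule
        record.tags.append("domain:supply-chain")
    return record

def pipeline(records):
    """Validate, clean, enrich and tag; drop records that fail validation."""
    return [enrich_and_tag(clean(r)) for r in records if validate(r)]
```

A real master data management system would do far more (deduplication, survivorship rules, stewardship workflows), but the shape is the same: only records that pass through every stage reach the agents.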
Governance Protects Data And Agentic AI Outputs
Once strong data is in place, governance becomes the next critical layer. Governance defines the boundaries within which AI agents operate. It ensures that agents align with business intent, comply with internal policies and external regulations and scale responsibly over time.
Effective governance isn’t about slowing innovation. It’s about creating clarity—identifying what agents can analyze, what actions they’re allowed to take, which decisions require human oversight and how outcomes are monitored and audited. Without governance, agentic AI can quickly drift from its intended purpose, making decisions that technically optimize a metric while undermining broader business objectives.
Governance also protects the outputs of agentic AI. When decision logic, data lineage and accountability are clearly defined, organizations can trace how and why an agent arrived at a recommendation or action. That transparency is essential for trust, particularly in regulated industries or high-impact use cases.
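One minimal way to picture that traceability (a hypothetical sketch, not a prescribed design) is to wrap every agent decision so that its inputs, lineage and output land in an audit log before the result is returned:

```python
import time

def audited_decision(agent_id, inputs, decide, audit_log):
    """Run an agent's decision function while recording how it was reached."""
    result = decide(inputs)
    audit_log.append({
        "agent": agent_id,
        "timestamp": time.time(),
        "inputs": inputs,               # exactly what data the agent saw
        "lineage": sorted(inputs),      # which fields fed the decision
        "decision": result,
    })
    return result

# Hypothetical usage: a pricing agent applying a 20% markup.
audit_log = []
price = audited_decision(
    "pricing-agent",
    {"cost": 10.0, "demand": 0.8},
    lambda i: i["cost"] * 1.2,
    audit_log,
)
```

With every recommendation logged this way, auditors can answer “how and why did the agent decide this?” without reverse-engineering the agent itself.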
Safe Agents Build Trust
Even with enriched data and strong governance, there is still a human hurdle to overcome: trust in automated, autonomous decision-making. Enterprise organizations may have streamlined and standardized their data processes, but people still need to feel confident handing over responsibility to AI agents.
One of the most effective approaches is to bring teams into the process early. Marketers, merchandisers and supply chain leaders love their spreadsheets for a reason: they’re familiar and tangible. Show them how the data they’ve been managing translates into a unified dashboard, where they can see exactly how AI agents view, parse and interpret that information.
Next, walk users through setting guardrails. Let them define what agents should analyze, what actions they can take and where limits exist. Show them the levers they can pull to manage agent behavior, risk tolerance and performance thresholds. This reinforces that AI agents are not black boxes; they are tools operating within clearly defined parameters.
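Those guardrails can be thought of as a small, explicit configuration that teams own. As an illustrative sketch (field names and thresholds are assumptions, not any vendor’s schema):

```python
from dataclasses import dataclass

@dataclass
class Guardrails:
    """Limits a team sets on an agent before granting it autonomy."""
    allowed_actions: set       # actions the agent may take on its own
    max_order_value: float     # spend threshold requiring human sign-off
    risk_tolerance: float      # 0.0 (conservative) to 1.0 (aggressive)

def requires_human_review(guardrails, action, value, risk_score):
    """True when a proposed action falls outside the team's guardrails."""
    return (action not in guardrails.allowed_actions
            or value > guardrails.max_order_value
            or risk_score > guardrails.risk_tolerance)

# Hypothetical usage: a supply chain agent allowed to reorder and reprice,
# but escalating anything large or risky to its human manager.
limits = Guardrails(allowed_actions={"reorder", "reprice"},
                    max_order_value=5000.0,
                    risk_tolerance=0.6)
```

Because the limits are data rather than buried logic, the team can inspect and adjust them, which is exactly what makes the agent feel like a tool with levers instead of a black box.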
Finally, guide teams to a point where they can primarily observe agent output, focusing on what matters most and stepping in only when necessary. This transition, from hands-on control to strategic oversight, is how organizations maintain accountability, ensuring agents are assigned to the right managers and aligned with business goals.
Trust Is Built On Data Discipline
Trust in agentic AI starts with data. The technology will continue to revolutionize workflows and open new opportunities to innovate and engage with consumers. But governed, maintained and enriched data is the defining factor in whether that transformation is sustainable.
Agentic AI isn’t a leap of faith; it’s a function of data discipline. The more disciplined an organization is about its master data and governance, the safer it will feel pushing AI agents toward higher-impact decisions. And the safer companies feel, the more boldly they can pursue new business outcomes.
In the end, trusting your AI agents isn’t about believing in the future. It’s about building the foundation to support it.


