From principles to practice
As artificial intelligence systems become more agentic and capable of multi-step reasoning and autonomous action, the issue of trust moves from theory to practice. These challenges are the focus of the Trust in AI Alliance, a new forum hosted by Thomson Reuters Labs that brings together leading AI researchers and engineers from industry and academia to define common approaches to building trustworthy agentic AI systems.
Learn more about the Trust in AI Alliance
This article introduces the first technical topic the Alliance will explore: what it actually takes to build trust in AI systems that operate in high-stakes business environments.
“Our customers are not using AI for experimentation,” said Joel Hron, chief technology officer at Thomson Reuters. “They use it to make decisions that they must be able to explain, defend, and support. As AI systems become more agentic, trust stops being a policy question and becomes an engineering requirement.”
From trust principles to system design
Thomson Reuters has a long history of operating under its Trust Principles, which include the core values of independence, integrity, and freedom from bias. These values now extend to our AI and Data Ethics Principles, which emphasize fairness, transparency, trustworthiness, and meaningful human involvement. Together, they form the basis for how we design and deploy AI in products used in high-stakes business environments.
These principles shape how agentic AI systems are constrained, audited, and integrated into business workflows: not as black boxes, but as accountable collaborators.
When professionals ask: can I trust this?
Consider a tax professional using an AI platform to research a complex compliance question. The system can reason through statutes, regulatory guidance, and documents that require legal interpretation, significantly speeding up the research process. But speed isn’t the hard part. The harder question is whether the professional can:
• Understand which sources informed the answer
• Verify that these sources are authoritative and unchanged
• Track how the system reached its conclusion
• Know where and when human judgment should intervene
This tension between autonomy and responsibility is exactly what defines trust in agentic AI.
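To make those four requirements concrete, the sketch below shows one possible shape for an auditable answer record. It is a minimal illustration, assuming a Python-based pipeline; the class and field names (CitedSource, AnswerRecord, requires_human_review, and so on) are hypothetical, not an actual Thomson Reuters schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CitedSource:
    source_id: str         # identifier in the authoritative corpus
    passage: str           # the exact text the answer relied on
    sha256: str            # content hash, so later tampering is detectable
    retrieved_at: datetime

@dataclass(frozen=True)
class AnswerRecord:
    question: str
    answer: str
    sources: tuple[CitedSource, ...]    # which sources informed the answer
    reasoning_steps: tuple[str, ...]    # how the system reached its conclusion
    requires_human_review: bool         # where human judgment should intervene
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Freezing both records (frozen=True) reflects the underlying design idea: an audit trail should be append-only, not editable after the fact.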
Three technical challenges that define trust
As part of its initial work, the Trust in AI Alliance will focus on three fundamental challenges that determine whether agentic systems are worthy of professional trust; each is illustrated with a brief sketch after the list.
• Context integrity: Can the system preserve every critical decision criterion when AI models compress or segment information?
• Immutable provenance: How can we ensure that cited source text remains unchanged and verifiable?
• Security against conflicting prompts: How can workflows be protected from malicious or conflicting input without compromising usability?
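For context integrity, one simple guardrail is to verify, after a long document has been compressed or summarized, that the decision criteria flagged as essential still appear in the output. This is a minimal sketch under that assumption; the function name and the crude substring test are illustrative, and a production system would use entity matching or an evaluator model instead.

```python
def missing_criteria(summary: str, required: list[str]) -> list[str]:
    """Return the decision criteria that did not survive compression."""
    text = summary.lower()
    return [c for c in required if c.lower() not in text]

# Example: one criterion was dropped during summarization.
criteria = ["filing deadline", "safe harbor", "penalty threshold"]
summary = "The client qualifies for the safe harbor if the filing deadline is met."
print(missing_criteria(summary, criteria))  # -> ['penalty threshold']
```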
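For immutable provenance, a common building block is a content hash recorded at citation time. The sketch below, again a hypothetical Python example rather than a description of any specific product, fingerprints the cited passage with SHA-256 so that any later change to the source text is detectable.

```python
import hashlib

def fingerprint(passage: str) -> str:
    """SHA-256 digest of the cited text, normalized for whitespace."""
    normalized = " ".join(passage.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def citation_unchanged(current_passage: str, recorded_digest: str) -> bool:
    """True if the text retrieved today matches what was cited at answer time."""
    return fingerprint(current_passage) == recorded_digest

# Recorded when the answer is produced; rechecked whenever the answer is reviewed.
digest = fingerprint("Gross income means all income from whatever source derived.")
assert citation_unchanged(
    "Gross income means all income from whatever source derived.", digest
)
```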
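For security against conflicting prompts, one widely used layering pattern is to keep trusted instructions and untrusted retrieved content in separate messages, framing the latter as quoted data and screening it for instruction-like text. The sketch assumes the generic role/content chat-message format rather than any vendor API, and the pattern list is deliberately simplistic and easy to bypass; it illustrates the structure, not a complete defense.

```python
import re

# Deliberately simplistic screen for instruction-like text in retrieved documents.
INSTRUCTION_LIKE = re.compile(
    r"ignore (all |previous |prior )*instructions|disregard the above",
    re.IGNORECASE,
)

def build_messages(system_prompt: str, question: str, retrieved_text: str) -> list[dict]:
    """Keep trusted instructions and untrusted content in separate messages."""
    if INSTRUCTION_LIKE.search(retrieved_text):
        raise ValueError(
            "Instruction-like text in retrieved document; route to human review."
        )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
        # Untrusted material is framed as quoted data, never as instructions.
        {"role": "user",
         "content": f"<retrieved_document>\n{retrieved_text}\n</retrieved_document>"},
    ]
```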
Why it matters
Agentic AI promises significant gains in productivity and visibility. But greater autonomy also increases risk. Without clear safeguards, transparency, and accountability, trust erodes and adoption suffers. By convening the Trust in AI Alliance, Thomson Reuters Labs is creating a space for candid, technical discussion about how trust can be designed, not assumed, as AI systems evolve. The goal is not to slow innovation, but to ensure that capability and accountability move forward together.
As the Alliance’s work begins, one principle anchors every conversation: trust should be built into agentic systems from the start, never added as an afterthought.
Looking to the future
The upcoming Trust in AI Alliance workshop will explore these challenges and opportunities. Our goal is to ensure that tomorrow’s agentic systems earn trust every step of the way.
Join us to frame the conversation and set the stage for collaborative solutions. Learn more about our commitments:
Thomson Reuters AI Principles | Thomson Reuters Trust Principles