The guidelines call for risk-calibrated monitoring of use cases, from note-taking and administrative tools to higher-stakes decision engines.
The Financial Services Institute urges policymakers to avoid making rash rules on artificial intelligence and instead rely on existing investor protection regimes, while calibrating oversight based on the risk of specific use cases.
In a new white paper released Wednesday, the group argues that best interest regulation and fiduciary standards already cover many conflicts and oversight obligations, with new AI-specific mandates justified only when the technology introduces new harms or materially alters existing risks.
The institute also views investor education and literacy as a critical compliance tool, coupled with transparent disclosures about how AI figures into advice and operations.
Low-risk applications – such as meeting transcription, administrative automation and internal research aids – should be subject to lighter requirements, the FSI white paper said, while higher-risk tools that help make or execute investment decisions should face stricter controls, documentation and periodic testing, with clear human accountability.
“As we continue to navigate this new era of AI, it is critical that our industry has clear and practical policies and practices in place to adopt these tools responsibly and effectively,” Dale Brown, president and CEO of the Financial Services Institute, said in a statement.
Brown said AI can “streamline processes and improve the customer experience,” adding that it requires thoughtful implementation and collaboration across the industry.
FSI’s latest call against redundant regulations comes just after FINRA emphasized generative AI and cyber fraud in its supervisory priorities for 2026. In a newly added section of its annual report, the brokerage industry’s self-regulatory organization explains how firms are testing large language model tools to summarize documents and surface information contained in policies and client records.
Underscoring the importance of protecting investors and maintaining the integrity of capital markets, FINRA urged firms to maintain governance systems that test for accuracy and bias and that record prompts and results. It also reminded firms that existing rules on supervision, communications, recordkeeping – a facet of regulation that has long been in need of modernization, according to SIFMA – and fair use still apply when AI is involved.
FINRA also flagged AI agents that plan and execute tasks across systems, highlighting the potential for overreach, audit challenges, and mismanagement of sensitive data if firms allow models to act without adequate guardrails.
In this context, the handbook proposed by the FSI focuses on practical adoption steps for independent firms and RIAs. It recommends prioritizing projects using a nine-factor scoring matrix that takes into account business impact, risks, time to market, data readiness, technical feasibility and return on investment.
On the plumbing side, a four-step interoperability roadmap begins with secure data exchange via APIs and event-driven architecture, then moves to common domain and security models before enabling explainability, documentation, and trust measures between vendors.
Case studies presented in the paper tout early productivity gains from AI, including a 40% reduction in administrative time through automated meeting notes, a 25% increase in client coverage through automated portfolio reviews, and a 30% reduction in onboarding costs through standardized data taxonomies.
Bob Coppola, head of the AI Working Group and chief technology officer at Sanctuary Wealth, said the industry needs standards that support “innovation, transparency, safety and responsible use,” adding that the document “lays the foundation for scalable and consistent adoption of AI.”