Employees are feeding sensitive data into public AI tools, and many organizations have no controls in place to stop it. A new Kiteworks report finds that most companies lack basic safeguards for managing this data.
Security control maturity pyramid (Source: Kiteworks)
Organizations lack safeguards for employee AI use
Only 17% of companies have technology in place to block or scan uploads to public AI tools. The other 83% rely on training sessions, email warnings, or guidelines; some have no policy at all.
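For context, the kind of control that 17% minority has in place is typically an egress filter that inspects traffic bound for known AI endpoints. Below is a minimal Python sketch of that idea. The domain list, regex patterns, and function names are illustrative assumptions, not details from the Kiteworks report, and a real deployment would sit behind a TLS-inspecting proxy with far more robust detection.

```python
import re

# Illustrative list of public AI endpoints an egress proxy might watch.
# These domains are examples only; maintain your own inventory in practice.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Naive patterns for common sensitive-data shapes. Real DLP engines use
# validated detectors (checksums, dictionaries, classifiers), not bare regex.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-number-like digit runs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN format
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def should_block(host: str, body: str) -> bool:
    """Return True if an upload to a public AI tool contains sensitive data."""
    if host not in AI_DOMAINS:
        return False
    return any(p.search(body) for p in SENSITIVE_PATTERNS)

# Example: a prompt pasted with a customer SSN is stopped at the proxy,
# while an innocuous prompt passes through.
assert should_block("chat.openai.com", "Summarize: SSN 123-45-6789")
assert not should_block("chat.openai.com", "Draft a polite meeting reminder")
```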
Employees share customer records, financial results, and even personally identifiable information with AI chatbots and copilots, often from devices that security teams cannot monitor. Once this data enters an AI system, it cannot be removed. It can live in training models for years, accessible in ways the organization cannot foresee.
Overconfidence makes the problem worse. A third of leaders believe their company tracks all AI use, but only 9% actually have working governance systems. This gap between perception and reality leaves organizations blind to how much information employees are exposing.
The compliance problem
Regulators around the world are moving quickly on AI oversight. In 2024, US agencies published 59 new AI regulations, more than double the previous year. Yet only 12% of companies list compliance violations as a top AI concern.
Day-to-day reality suggests a much greater risk. GDPR requires records of all processing activities, but organizations cannot track what employees upload to chatbots. HIPAA requires audit trails for access to patient information, but shadow AI use makes that impossible. Financial firms and public companies face the same problem with SOX and related controls.
In practice, most companies cannot answer basic questions such as which AI tools hold customer data or how to delete it if a regulator demands it. Without visibility, every prompt an employee sends to a chatbot could become a compliance failure.
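Those record-keeping obligations hint at what a starting point could look like: logging every employee interaction with an AI tool in a form an auditor can query. The sketch below is a minimal, assumed schema; the field names and JSON Lines format are illustrative choices, and a real GDPR Article 30 register or HIPAA audit trail would follow your own compliance schema.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class AIUsageRecord:
    """One auditable entry per employee interaction with an AI tool.

    Field names are illustrative; map them to whatever your processing
    register or audit-trail schema actually requires.
    """
    user_id: str
    tool: str                      # e.g. "public-chatbot"
    data_categories: list          # e.g. ["customer", "financial"]
    action: str                    # "allowed" or "blocked"
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

def log_ai_usage(user_id: str, tool: str, categories: list, action: str,
                 path: str = "ai_usage_audit.jsonl") -> None:
    """Append one record to an append-only JSON Lines audit log."""
    record = AIUsageRecord(user_id, tool, categories, action)
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record that a prompt containing customer data was blocked.
log_ai_usage("emp-4821", "public-chatbot", ["customer"], "blocked")
```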
Why it matters for CISOs
For CISOs, the findings point to two priorities. The first is technical control. Blocking sensitive data uploads and scanning content before it reaches AI platforms should be treated as the baseline. Employee training helps, but the numbers show it cannot carry the load on its own.
The second is compliance. Regulators already expect AI governance and are issuing penalties. CISOs must be able to show that their organizations can see and control how data moves into AI systems.
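Tying the two priorities together, a single enforcement point can both gate and record every outbound AI request. The hypothetical gateway below reuses the illustrative `should_block()` and `log_ai_usage()` helpers sketched earlier; it is one possible design under those assumptions, not a reference implementation.

```python
def handle_prompt(user_id: str, host: str, prompt: str) -> bool:
    """Gate one outbound AI request: block sensitive uploads, log everything.

    Returns True if the request may proceed to the AI platform.
    """
    blocked = should_block(host, prompt)
    log_ai_usage(
        user_id=user_id,
        tool=host,
        categories=["unclassified"],  # a real system would classify the match
        action="blocked" if blocked else "allowed",
    )
    return not blocked

# Example: the SSN-bearing prompt is denied, and an audit record is written
# either way, giving compliance teams the visibility regulators expect.
allowed = handle_prompt("emp-4821", "chat.openai.com", "SSN 123-45-6789")
print("forwarded" if allowed else "blocked")
```

The design choice worth noting is that logging happens on every request, not just on blocks: the compliance gap described above comes as much from missing records of allowed traffic as from the leaks themselves.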
“Whether it is Middle East organizations with zero 24-hour detection, European companies with as little as 12% EU data readiness, or the 35% of APAC organizations that cannot assess AI risks, the root cause is always the same: organizations cannot protect what they cannot see,” said Patrick Spencer, VP of corporate marketing and research at Kiteworks.