Threats on the rise
According to one estimate, generative AI (GenAI) could add the equivalent of $2.6 trillion to $4.4 trillion per year to the global economy. But as more and more organizations build out AI infrastructure and integrate the technology into critical processes, they may also be exposing themselves to the risk of compromise, extortion, and sabotage in new ways.
We have highlighted this in the past, noting numerous vulnerabilities and misconfigurations in AI components such as vector stores, LLM hosting platforms, and other open source software. Among other things, organizations fear that threat actors may steal valuable training data for profit, poison it to compromise the performance and integrity of LLMs, or steal the models themselves.
In developing AML.CS0028, we uncovered some worrying trends:
- More than 8,000 exposed container registries were found online, up sharply from the number observed in 2023.
- 70% of these registries allowed push (write) permissions, meaning attackers could inject malicious AI models.
- Within these registries, 1,453 AI models were identified, many in Open Neural Network Exchange (ONNX) format, with vulnerabilities that could be exploited.
This sharp growth reflects a broader trend: attackers are increasingly targeting the underlying infrastructure that supports AI, not just the machine learning models themselves. A quick way for defenders to check whether their own registries are exposed in this way is sketched below.
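The following is a minimal sketch, not part of the original research, of how a defender might probe a registry they own for the two exposures listed above: anonymous read access and anonymous push (write) access. It assumes the registry speaks the standard Docker Registry HTTP API v2; the REGISTRY and REPO values are hypothetical placeholders.

```python
# Minimal exposure probe for a container registry you own.
# Assumes the standard Docker Registry HTTP API v2; REGISTRY and REPO
# are hypothetical placeholders. Only probe infrastructure you control.
import requests

REGISTRY = "https://registry.example.com"  # hypothetical host
REPO = "team/model"                        # hypothetical repository name

# 1. Anonymous read: /v2/_catalog lists repositories when no auth is enforced.
resp = requests.get(f"{REGISTRY}/v2/_catalog", timeout=10)
print("anonymous catalog read:", "EXPOSED" if resp.status_code == 200 else "blocked")

# 2. Anonymous push: initiating a blob upload returns 202 Accepted when the
#    registry grants unauthenticated write access, i.e. an attacker could
#    inject a malicious model image.
resp = requests.post(f"{REGISTRY}/v2/{REPO}/blobs/uploads/", timeout=10)
print("anonymous push:", "EXPOSED" if resp.status_code == 202 else "blocked")
```

A 401 response to either request indicates the registry is at least demanding credentials, which is the expected baseline.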
Turning research into action
Fortunately, Trend Micro's global team of forward-looking threat researchers is always on the lookout for the tactics, techniques, and procedures of emerging threat actors. The more we know, the better we can help network defenders build cyber resilience and strengthen their detection, protection, and response efforts.
We submitted our latest discovery to MITRE ATLAS. The case study (AML.CS0028) is based on a real-world poisoning attack against an AI model hosted in a cloud-based container registry. As part of our research, we discovered more than 8,000 exposed container registries, 70% of which allowed write access, along with 1,453 AI models that could likewise have been exploited.
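In a poisoning scenario like this one, the model artifact itself is swapped or tampered with, so a practical mitigation is to verify models before loading them. Below is a hedged illustration, not taken from the case study itself, of such a check for an ONNX model using the official onnx Python package; MODEL_PATH and EXPECTED_DIGEST are placeholders you would pin when publishing the model.

```python
# Verify an ONNX model before use: pin its digest, validate its structure,
# and review the operators it declares. MODEL_PATH and EXPECTED_DIGEST are
# hypothetical placeholders.
import hashlib

import onnx

MODEL_PATH = "model.onnx"
EXPECTED_DIGEST = "0000..."  # SHA-256 recorded when the model was published

# 1. A registry-level swap changes the file, so a pinned digest catches it.
with open(MODEL_PATH, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
if digest != EXPECTED_DIGEST:
    raise RuntimeError(f"Model digest mismatch: {digest}")

# 2. Validate the model against the ONNX spec with the official checker.
model = onnx.load(MODEL_PATH)
onnx.checker.check_model(model)

# 3. Unexpected or custom operators in the graph are worth a manual review.
ops = sorted({node.op_type for node in model.graph.node})
print(f"Graph uses {len(ops)} distinct op types:", ops)
```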
This is the first ATLAS case study to involve both cloud and container infrastructure in a sophisticated supply chain compromise. Only 31 case studies have been accepted by the non-profit organization since 2020, so we are delighted to make a positive contribution to the security community with this submission.
Fighting the good fight together
As you might expect from an all-star team of expert researchers, this case study stands out from the crowd in both scope and technical depth. We are confident that its publication in MITRE ATLAS will help make the digital world safer, for several reasons:
- The study is encoded in ATLAS YAML, allowing easy integration into tools already aligned with MITRE ATT&CK (a sketch of this follows the list).
- It provides a reproducible scenario that defenders can simulate to improve threat detection and incident response.
- It contributes to MITRE's secure AI initiative, encouraging others to share anonymized incidents and help build a collective understanding of AI threats.
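To make the first point concrete, here is a hedged sketch of how ATT&CK-aligned tooling might consume the case study's YAML. The field names (id, name, procedure, tactic, technique, description) mirror our reading of the public mitre-atlas/atlas-data schema and should be treated as assumptions; check that repository for the authoritative format.

```python
# Load an ATLAS case study and walk its procedure steps, e.g. to seed
# detections in ATT&CK-aligned tooling. Field names are assumptions based
# on the public mitre-atlas/atlas-data repository.
import yaml  # pip install pyyaml

with open("AML.CS0028.yaml") as f:  # hypothetical local copy of the study
    study = yaml.safe_load(f)

print(study["id"], "-", study["name"])
for step in study.get("procedure", []):
    # Each step pairs an ATLAS tactic with a technique and a description.
    print(f'{step["tactic"]} -> {step["technique"]}: {step["description"]}')
```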
At Trend Micro, we never forget that cybersecurity is a team sport. That is why our threat research and product development efforts serve to protect not only our customers, but all technology users. The same philosophy has prompted us to create a specialized AI competition at Pwn2Own later this year, which will help surface new vulnerabilities in some of the world's most popular AI components.
With MITRE ATLAS, we have another way to make a positive impact on the global cybersecurity landscape.