Tech leaders struggling to store AI data, never mind manage it, research shows – Blocks and Files


Four-fifths of organizations have been burned by employees using generative AI, with leaks of sensitive data almost as common as false or inaccurate results, Komprise research has revealed.

And while companies rush to put AI to work, they are also playing catch-up when it comes to storing and managing their data, the study on AI, data, and enterprise risk has found.

More than two-thirds said infrastructure was a top priority when it comes to supporting AI initiatives, with 9% saying it was the most important priority after cybersecurity.

More than a third identified increasing storage capacity as their top AI storage investment, while 37% identified data management for AI – on the basis that AI is only useful when it incorporates the organization’s own data.

And slightly less than a third said acquiring “efficient storage to work with the GPUs” was their top priority. Overall, 46% said all three were important priorities.

Simply finding and moving the right unstructured data was a key challenge for 55% of companies, with a lack of visibility across data and the absence of “easy ways to classify and segment data” also key concerns.

And a third of the respondents said they had “an internal disagreement on how to approach data management and governance for AI”.

Krishna Subramanian, co-founder of Komprise, said companies were starting to investigate tools to enforce solid AI governance and compliance. The alternative was company data leaking out and becoming “part of the public LLM.”

Let’s get tactical

“A major tactic is classifying sensitive data and using workflow automation to prevent its improper use with AI (73%). More than half (55%) are also instituting policies and training their workforce.”

That would seem obvious, she said, “but it is encouraging to see it happening.”

And some are restricting the use of public GenAI tools as they deploy their own internal tools.

Customers were trying to get better visibility into their data so they can manage it, said Subramanian, “and planning for tagging, classifying, and segmenting data, as well as automation to help feed the right data to AI and monitor the results.”

She said the reality was that few businesses would train their own models at any great scale. That means they need fewer GPUs and less GPU-accessible storage. But it also means they need to get to grips with unstructured data.

“Instead, the focus is on getting good business data into pre-trained models so they can deliver optimal business results. Data curation for AI is emerging as the next phase and core investment in AI.”

“As the inference market begins to take off, the emphasis will be on companies using AI effectively with their own data,” said Subramanian. “After all, the models have already been trained on all publicly available data.”
