An artificial intelligence (AI) company has said its technology has been "weaponised" by hackers to carry out sophisticated cyber-attacks.
Anthropic, which makes the Claude chatbot, says its tools have been used by hackers "to commit large-scale theft and extortion of personal data".
The firm said its AI had been used to help write code that carried out cyber-attacks, while in another case, North Korean scammers used Claude to fraudulently obtain remote jobs at top US companies.
Anthropic says it was able to disrupt the threat actors, reported the cases to the authorities, and has improved its detection tools.
Using AI to help write code has grown in popularity as the technology has become more capable and accessible.
Anthropic says it detected a case of so-called "vibe hacking", in which its AI was used to write code that could hack into at least 17 different organisations, including government bodies.
It said the hackers "used AI to what we believe is an unprecedented degree".
They used Claude to "make both tactical and strategic decisions, such as deciding which data to exfiltrate and how to craft psychologically targeted extortion demands".
The AI even suggested ransom amounts to demand from the victims.
Agentic AI, where the technology operates independently, has been touted as the next big step in the space.
But these examples show some of the risks that such powerful tools pose to potential victims of cybercrime.
The use of AI means "the time needed to exploit cybersecurity vulnerabilities is shrinking rapidly," said Alina Timofeeva, an adviser on cybercrime and AI.
"Detection and mitigation must shift towards being proactive and preventative, not reactive after the harm is done," she said.
But it is not only cybercrime that the technology is being used for.
Anthropic said "North Korean operatives" had used its models to create fake profiles in order to apply for remote jobs at US Fortune 500 technology companies.
The use of remote jobs to gain access to company systems has been known about for some time, but Anthropic says the use of AI in the fraud scheme is "a fundamentally new phase for these employment scams".
It said AI had been used to write job applications, and once the fraudsters were hired, it was used to help translate messages and write code.
North Korean workers are often "sealed off from the outside world, culturally and technically, which makes it harder for them to pull off this subterfuge," said Geoff White, co-presenter of the BBC podcast The Lazarus Heist.
"Agentic AI can help them leap over those barriers, allowing them to get hired," he said.
"Their new employer is then in breach of international sanctions by unwittingly paying a North Korean."
But he said that AI "isn't currently creating entirely new crimewaves" and that "a lot of ransomware intrusions still happen thanks to tried-and-tested techniques such as sending phishing emails and hunting for software vulnerabilities".
"Organisations need to understand that AI is a repository of confidential information that requires protection, just like any other form of storage system," said Nivedita Mildy, senior security consultant at Black Duck.