A dangerous tipping point? AI hacking claims divide cybersecurity experts

Is Anthropic’s claim of the first AI-orchestrated cyberattack an overblown scare, or a worrying watershed for artificial intelligence?

Cybersecurity experts have responded in a variety of ways to Anthropic’s recent announcement that it discovered the world’s first AI-led hacking campaign.

While some observers have raised the alarm about what they fear is the start of a dangerous turning point, others have reacted skeptically, arguing that the startup’s account leaves out important details and raises more questions than it answers.

In a report released on Friday, Anthropic claimed that its coding assistant Claude Code had been manipulated to carry out 80 to 90 percent of a “large-scale” and “highly sophisticated” cyberattack, with human intervention required only occasionally.

The attack was intended to infiltrate government entities, financial institutions, tech companies, and chemical manufacturing companies, according to Anthropic, the creator of the well-known Claude chatbot.

The San Francisco-based business, which attributed the attack to Chinese state-sponsored hackers, did not specify how it had discovered the operation or name any of the “roughly” 30 targets it said had been attacked.

Roman V Yampolskiy, an expert on AI and cybersecurity at the University of Louisville, said there was no denying that AI-assisted hacking poses a serious threat.

Yampolskiy told Al Jazeera that “Modern models can write and adapt exploit code, sift through large volumes of stolen data, and orchestrate tools more quickly and affordably than human teams.”

Such tools, he said, narrow the entry-level skill gap and make actors with strong operational skills more likely to succeed, functioning like a junior cyber-operations team in the cloud, rentable by the hour.

Yampolskiy predicted that AI would increase both the number and the severity of attacks.

Jaime Sevilla, director of Epoch AI, said the report did not contain much new information, as previous research had already suggested that AI-assisted attacks were both feasible and likely to become more prevalent.

Sevilla told Al Jazeera, “This is likely to hit medium-sized businesses and government agencies hardest.”

Such organisations were previously not valuable enough to justify focused campaigns and are frequently underfunded in cybersecurity, he said, but AI now makes them profitable targets. Sevilla added that he anticipates many of these organisations will respond by launching vulnerability-reward programmes and using AI to identify and fix weaknesses internally.

Other analysts have refrained from drawing firm conclusions, calling instead for more information from Anthropic.

After US Senator Chris Murphy warned that such attacks would proliferate if AI regulation was not made a priority, Meta’s chief AI scientist Yann LeCun accused the lawmaker of being “played” by a company pushing for regulatory capture.

In a post on X, LeCun wrote, “They are scaring everyone with dubious studies so that open source models are regulated out of existence.”

Anthropic did not respond to a request for comment.

China “consistently and resolutely” opposed all types of cyberattacks, according to a representative for the Chinese embassy in Washington, DC.

“We hope that relevant parties adopt a professional and responsible attitude and base their characterisation of cyberattacks on facts rather than speculation,” spokesperson Liu Pengyu told Al Jazeera.

Anthropic has business incentives to highlight both the risks of such attacks and its ability to counteract them, according to Toby Murray, a professor of computer security at the University of Melbourne.

Some people have questioned Anthropic’s claims, doubting whether the attackers could really have accomplished such complex tasks with so little human supervision, Murray said.

“Unfortunately, they don’t provide reliable information about the specific tasks that were carried out or the oversight that was provided.” That, he said, makes the claims difficult to assess either way.

Given how effective some AI assistants are at coding tasks, Murray said he didn’t find the report particularly surprising.

Murray said he did not expect AI-powered hacking to produce fundamentally new kinds of attacks.

“But it might cause a change in the scale,” he said. In the future, we should anticipate more AI-powered hacks, and those hacks are likely to be more effective.

Despite the growing risks, analysts predict that AI will also be crucial in bolstering cyberdefences.

Fred Heiding, a Harvard University research fellow focusing on computer security and AI, said he thinks AI will, over time, give cybersecurity experts a “significant advantage”.

“A shortage of human cyber-professionals is holding back many cyber-operations today,” Heiding told Al Jazeera. “AI will help us overcome this bottleneck by enabling us to test all of our systems at scale.”

Heiding said that while Anthropic’s account is broadly credible, it is also “overstated”. The biggest danger, he argued, is that attackers will seize on increasingly sophisticated AI before security professionals do, giving malicious actors a window of opportunity.

“Unfortunately, the defensive community is likely to be too slow” to incorporate the new technology into automated security-testing and patching solutions, he said.

Source: Aljazeera
