Global operation against AI-generated child abuse images leads to dozens of arrests


At least 25 arrests have been made during a worldwide operation against child abuse images generated by artificial intelligence (AI), the European Union’s law enforcement organisation Europol has said.

The suspects were part of a criminal group whose members engaged in distributing fully AI-generated images of minors, according to the agency.

The operation is one of the first involving such child sexual abuse material (CSAM), Europol says. The lack of national legislation against these crimes made it “exceptionally challenging for investigators”, it added.

Arrests were made simultaneously on Wednesday 26 February during Operation Cumberland, led by Danish law enforcement, a press release said.

Authorities from at least 18 other countries have been involved, and the operation is ongoing, with more arrests expected in the coming weeks, Europol said.

In addition to the arrests, so far 272 suspects have been identified, 33 house searches have been conducted and 173 electronic devices have been seized, according to the agency.

It also said the main suspect was a Danish national who was arrested in November 2024.

The statement said he “ran an online platform where he distributed the AI-generated material he produced”.

After making a “symbolic online payment”, users from around the world were able to get a password that allowed them to “access the platform and watch children being abused”.

The agency said online child sexual exploitation was one of the top priorities for the European Union’s law enforcement organisations, which were dealing with “an ever-growing volume of illegal content”.

Europol added that even in cases when the content was fully artificial and there was no real victim depicted, such as with Operation Cumberland, “AI-generated CSAM still contributes to the objectification and sexualisation of children”.

Europol’s executive director Catherine De Bolle said: “These artificially generated images are so easily created that they can be produced by individuals with criminal intent, even without substantial technical knowledge.”

She warned law enforcement would need to develop “new investigative methods and tools” to address the emerging challenges.

The Internet Watch Foundation (IWF) warns that more AI-generated sexual abuse images of children are being produced, and that they are becoming more prevalent on the open web.

In research last year, the charity found that 3,512 AI-generated child sexual abuse and exploitation images were discovered on a single dark web site over a one-month period. Compared with the same period the previous year, the number of images in the most severe category (Category A) had risen by 10%.

Experts say AI child sexual abuse material can often look incredibly realistic, making it difficult to tell the real from the fake.
