Spy agency turns to AI to tackle child abuse

UK intelligence agency GCHQ intends to use artificial intelligence to tackle issues from child sexual abuse to disinformation and human trafficking.

The UK's adversaries were already using the technology, it said.

The agency has published a paper, Ethics of AI: Pioneering a New National Security, saying the technology will be put at the heart of its operations.

And officials say it will help analysts spot patterns hidden inside large - and fast-growing - amounts of data.

This could include:

*  trying to spot fake online messages used by other states to spread disinformation

*  mapping international networks engaged in human or drug trafficking

*  finding child sex abusers hiding their identities online

But it cannot predict human behaviour, such as whether an individual is moving towards carrying out a terrorist attack.

The paper also sets out how GCHQ wants to support the AI sector, including setting up an industry-facing AI lab at its Manchester office, dedicated to prototyping projects and mentoring start-ups.

And GCHQ says the paper details how it will ensure it uses AI fairly and transparently, including:

*  an AI ethical code of practice

*  recruiting more diverse talent to help develop and govern its use

It is a sign the agency wants to avoid a repeat of the criticism, following whistleblower Edward Snowden's revelations, that people were unaware of how it used data.

"While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ," the agency's director, Jeremy Fleming, said.

"Today, we are setting out our plan and commitment to the ethical use of AI in our mission.

"I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI."

It comes as the government prepares to publish its Integrated Review into security, defence, development and foreign policy, in which technology, including AI, is expected to play a central role.

GCHQ outlined areas where adversaries are already using AI - and where it believes the technology could be used in response:

Foreign-state disinformation


A growing number of states are using AI to automate the production of false content to affect public debate, including "deepfake" video and audio, GCHQ warns.

The technology can be used to target and personalise such content, and to spread it through chatbots or by interfering with social-media algorithms.

But AI could also help GCHQ detect and fact-check false content, and identify "troll farms" and botnet accounts.

Child sex abuse


GCHQ says AI could:

*  help analyse evidence of grooming in chat rooms

*  track the disguised identities of offenders across multiple accounts

*  discover hidden people and illegal services on the dark web

*  help police officers infiltrate rings of offenders

*  filter content to prevent analysts from being unnecessarily exposed to disturbing imagery

Cyber-threats


AI is increasingly being used to automate cyber-attacks, GCHQ says, but it could also help identify malicious software and attackers as they continually develop new tactics to breach systems and steal data.

Trafficking


Organised crime groups are becoming increasingly sophisticated in their use of technology, including encryption tools, the dark web and cryptocurrency, GCHQ says.

But AI could help by:

*  mapping the international networks that enable trafficking - identifying individuals, accounts and transactions

*  "following the money" - analysing complex transactions, possibly revealing state sponsors or links to terrorist groups

*  bringing together different types of data - such as imagery and messaging - to track and predict where illegal cargos are being delivered
