Misuse of children’s images shows the…

The Alannah & Madeline Foundation is shocked and dismayed by findings reported by the international non-governmental organisation Human Rights Watch that personal photographs of Australian children have been "scraped" from the web and used to train popular AI tools without the knowledge or consent of the children or their parents. These findings show the need for stronger legislation and regulation to protect children from exploitation online.

According to Human Rights Watch, the dataset used to train the AI tools contained links to identifiable images of Australian children. In some cases, the children's names and other personal details were available in accompanying information. Many of the photos had originally been posted in relatively private online environments; blaming the children or their caregivers for being “careless” would be grossly unfair.

Using children’s images to train AI models carries various risks, including the risk that malicious individuals will use the models to create convincing images of children that are deceptive, offensive or abusive, such as “deepfake” child sexual abuse imagery.

Even if the worst-case scenarios do not occur, using children’s images without the knowledge or consent of the children or their parents still represents a violation of children’s right to be protected from all forms of exploitation, including in digital environments.

We understand the organisation that manages the dataset has pledged to remove the children's images. But with generative AI technologies evolving rapidly, removing one dataset will not resolve the broader problem.

To prevent and address such violations in the future, the Foundation believes key changes are needed in legislation, regulation and industry practice.

Firstly, we call for a Children’s Online Privacy Code, led by the Privacy Commissioner and applied to all services likely to be accessed by children. The code should determine what industry is allowed to do with children’s personal information, including their personal images, with a strong focus on upholding the rights of the child. The code should also provide a clear basis for banning the trading of children’s personal information.

We understand legislation to reform the Privacy Act 1988 will be introduced to parliament in August. It is important that this legislation provides for a Children’s Online Privacy Code, which the Australian Government has previously pledged to support.

Once a Code is created, the regulator will need strong resourcing to bring it to life. Changing the way industry handles children’s data is no small task – but with the rise of generative AI and other digital technologies, we cannot afford not to do it.

Meanwhile, Australia’s Online Safety Act is currently under review. We want to see the Act updated to move away from the old ‘co-regulatory’ approach, under which the digital industry drafted its own safety codes to address illegal content such as child sexual abuse material. Leadership by eSafety, as an expert independent regulator, would deliver stronger protections for children. We also believe the Act should be updated to require industry to undertake child rights impact assessments and act on their findings.

New industry safety standards have just been registered by eSafety; amongst other things, they require digital platforms to invest in systems, tools and processes to address child sexual abuse material. We welcome this new requirement and trust industry will act on it meaningfully and effectively.

For too long, digital platforms have operated in ways which are unsuitable and even unsafe for children. This latest scandal should remind us that it is not enough to simply caution children and their parents to “be more careful” online. We need governments and industry to step up to create a digital world which upholds the rights of the child, instead of violating them.

Learn more about our Digital Rights Advocacy work here.