The Express Gazette
Sunday, November 9, 2025

Report: Australia Can Enforce Under-16 Social Media Ban but Technologies Carry Significant Risks

UK Age Check Certification Scheme finds several technical options for age verification, but all have shortcomings around accuracy, privacy and circumvention as Australia prepares to enforce the world-first law in December


Australia could technically implement its planned ban on social media accounts for children under 16, but every method examined carries substantial risks or practical shortcomings, a UK-based testing body concluded in a final report published Sunday.

The federal government, which commissioned the Age Check Certification Scheme to assess options, says the new requirement is intended to limit the harmful impacts of social media on young people. Under laws due to come into effect in December, social media platforms operating in Australia must take “reasonable steps” to prevent users under 16 from creating accounts and to deactivate existing accounts belonging to that age group.

The independent assessment examined a range of techniques that platforms and regulators could use to verify age or otherwise restrict access. Those options included formal verification using official identity documents, various parental approval systems, and technologies that estimate age from facial structure, gestures or online behaviour. While each approach can reduce the likelihood that underage users gain access, the report said none provides a robust, risk-free solution.

Document-based verification was judged effective at establishing age in many cases but was found to exclude some users and to create privacy risks. Many children do not possess government identity documents or are unable to provide them without an adult’s assistance. Requiring copies of passports, driver’s licences or national identity cards to open social media accounts could therefore deny access to eligible teenagers and concentrate highly sensitive personal data in the hands of platforms or third-party verifiers.

Technologies that infer age from biometric markers such as facial features or from behavioural signals were found to face serious accuracy and fairness challenges. The report flagged concerns about algorithmic bias, the potential for higher error rates for people of particular ethnicities and ages, and the risk of false positives that would wrongly block users who are old enough to hold accounts. It also highlighted the privacy implications of collecting facial images or detailed behavioural traces, which can create new opportunities for misuse, surveillance or data breaches.

Parent-led verification models — in which an adult confirms a child’s age — were described as simpler and less intrusive in some respects, but also limited in effectiveness. The report noted that parental approval does not reliably prevent determined minors from creating accounts and raises questions about when parental consent is appropriate and how to verify the identity of the consenting adult. It also said parental controls and approvals could be circumvented by shared devices or by children using friends’ or relatives’ credentials.

Device-based checks such as SIM registration and telecom-based verification offer another possible route but are far from foolproof. Mobile numbers and devices can be bought, rented or spoofed, and device checks do not distinguish between users of different ages on the same hardware. The report underscored that measures relying on device or network signals can be evaded through virtual private networks, emulators or other technical workarounds.

Across the suite of technologies, the assessors emphasised trade-offs between effectiveness, inclusivity and privacy. Any system restrictive and accurate enough to be effective risks excluding large groups of teenagers who are old enough to hold accounts, or relies on collecting sensitive personal information. Systems designed with stronger privacy protections may be easier to circumvent or may produce higher rates of mistaken age attribution.

The report also drew attention to broader jurisdictional and operational challenges. Social media platforms often operate across borders, maintain global user databases, and use centralised systems that are not tailored to one national requirement. Enforcing Australia’s rules will therefore depend on cooperation from international providers and on the technical and legal mechanisms platforms employ to geofence services, verify users’ locations, and manage account status in different markets.

Privacy advocates and technology experts have previously warned about the concentration of sensitive data that could result from large-scale age verification schemes. The report echoed those concerns, warning that even well-intentioned verification systems can create new privacy and security liabilities if they centralise identity information or if verification is outsourced to commercial third parties without strict safeguards.

The federal government has argued that the requirement to take “reasonable steps” gives platforms flexibility to choose appropriate methods. The report’s authors said that flexibility is pragmatic but that it places responsibility on policymakers to define what constitutes reasonable steps and to ensure that safeguards are in place to protect children’s data, preserve fairness, and provide transparent redress processes for users who are wrongly prevented from accessing services.

The assessment comes as the policy is attracting international attention. The Australian approach has been described by some officials as a global first for its explicit age threshold and mandated platform action, and other governments are watching closely for lessons about enforcement, effectiveness and unintended consequences.

Parents and child-safety campaigners largely welcomed the government’s objective of reducing children’s exposure to harmful content and online risks. At the same time, several technology and privacy experts have urged careful implementation. They argue that policy success depends not only on the chosen technical measures but also on strict limits on data retention, independent audits of verification tools, legal oversight of third-party certifiers, and accessible complaint and correction mechanisms for users.

The report underscored the importance of designing systems that minimise the collection and storage of personal data, that avoid creating permanent identity linkages wherever possible, and that are transparent about how age is assessed and how long verification data are retained. It also advised that regulators consider impact assessments, independent testing for algorithmic bias, and clear rules to govern the commercial use of verification data.

Australian officials have said the law aims to curtail the risks social media poses to children’s mental health and wellbeing. The report does not evaluate the policy’s underlying public-health rationale or measure potential benefits; it focuses on the technical feasibility and implications of enforcement. Its findings suggest that achieving the policy’s aims will require careful balancing of child protection goals with privacy rights, inclusivity, and operational realities.

The Age Check Certification Scheme’s final report provides a catalogue of options and trade-offs but stops short of endorsing a single approach. Instead, it frames the challenge as a set of choices that policymakers, regulators and platforms must weigh. The testing body recommended that any adopted solution be accompanied by strong legal protections, independent oversight, and mechanisms that allow affected users to appeal or correct verification outcomes.

With a December start date for enforcement, platforms and regulators face a compressed timetable to develop and test procedures that meet Australia’s requirement for reasonable steps. The report’s publication is likely to inform both government decision-making and platform preparations, while also shaping public debate about how to protect children online without creating new risks through intrusive or exclusionary verification systems.

The Australian government commissioned the study as part of its preparations to implement the new rules. The report contributes significantly to the policy discussion by identifying the technological choices available and detailing their limitations. It leaves open how Australian lawmakers and regulators will define the legal and technical boundaries of reasonable steps, how they will secure cooperation from international platforms, and how they will safeguard the privacy and rights of young people affected by the measures.