Meta has expanded its artificial intelligence systems to detect underage users across Facebook and Instagram, as global scrutiny intensifies over child safety and online platform accountability.
The update, introduced by Meta Platforms, is designed to identify accounts belonging to users under the age of 18 through a combination of visual analysis and behavioural signals. The company says the move is part of a broader effort to strengthen protections for younger users on its social media platforms.
According to Meta, the system reviews photos and videos to estimate age by analysing physical features such as facial structure, bone proportion, and general appearance. It also examines user behaviour, including birthday posts, school-related content, and engagement patterns that may indicate a minor’s age.
The company has clarified that this is not facial recognition technology. Instead, it relies on probabilistic AI models that assess age-related cues without directly identifying individuals through biometric matching.
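For illustration only, a probabilistic approach of this kind might combine weighted behavioural cues into an age-likelihood score rather than matching faces. Meta has not published its model; every signal name, weight, and threshold in this sketch is an invented assumption:

```python
# Illustrative sketch only: combines hypothetical behavioural signals into a
# probability that an account belongs to a minor. This is NOT Meta's model;
# all signal names and weights here are assumptions.
import math

# Hypothetical weights for age-related cues (assumed values for illustration).
SIGNAL_WEIGHTS = {
    "birthday_post_suggests_minor": 2.0,
    "school_related_content": 1.2,
    "engagement_pattern_teen_like": 0.8,
    "visual_estimate_under_18": 2.5,
}
BIAS = -3.0  # assumed prior: most accounts belong to adults

def minor_probability(signals: dict) -> float:
    """Logistic combination of boolean cues into a probability score."""
    score = BIAS + sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return 1.0 / (1.0 + math.exp(-score))

# Example: two cues present → probability crosses an assumed 0.5 threshold.
flags = {"school_related_content": True, "visual_estimate_under_18": True}
p = minor_probability(flags)
flagged = p > 0.5
```

The key point the sketch captures is that no single cue is decisive; the system weighs many weak signals, which is also why false positives remain a concern.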
Accounts flagged as belonging to users under 13 are automatically deactivated. In such cases, users must verify their age within a limited timeframe to regain access. Verification can be completed through a government-issued identity document or through facial age estimation tools developed by Yoti.
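The deactivate-then-verify flow described above can be sketched as a simple state transition. The state names, the length of the verification window, and the method identifiers below are illustrative assumptions, not Meta's actual implementation:

```python
# Illustrative state flow for the flag → deactivate → verify process described
# in the article. States, deadline length, and method names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

VERIFICATION_WINDOW = timedelta(days=14)  # assumed deadline; Meta's is unpublished

@dataclass
class FlaggedAccount:
    user_id: str
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "deactivated"  # flagged under-13 accounts start deactivated

    def verify(self, method: str, estimated_age: int, now: datetime) -> str:
        """Accept a government ID or a facial age estimate (e.g. via Yoti)."""
        if now - self.flagged_at > VERIFICATION_WINDOW:
            self.status = "permanently_disabled"
        elif method in ("government_id", "facial_age_estimation") and estimated_age >= 13:
            self.status = "reactivated"
        return self.status

# Example: verification inside the window restores access.
acct = FlaggedAccount("user123")
status = acct.verify("government_id", 16, acct.flagged_at + timedelta(days=1))
```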
The same verification process applies to users who attempt to bypass platform restrictions by falsely updating their age from under 18 to adult status. Meta says these safeguards are intended to prevent circumvention of its teen safety policies.
The rollout of the visual scanning system is currently active on Instagram in the United States, United Kingdom, Canada, and Australia. The company has now begun expanding the system to Brazil and 27 European Union countries.
Facebook users in the United States are also receiving the updated protections, while deployment in the UK and EU is expected to continue in the coming months.
Meta says it plans to extend these protections to Instagram users worldwide by the end of 2026, although timelines may vary depending on regulatory approvals in different jurisdictions.
While visual scanning remains limited to selected regions, Meta confirmed that other components of its AI-based age detection system are already operating globally. These include profile-based behavioural analysis, which identifies age-related patterns without requiring image scanning.
The company began deploying AI-driven age detection tools in 2024, claiming that the system has already helped place hundreds of millions of users into restricted “Teen Accounts”.
Some of those accounts belong to users who allegedly attempted to bypass age restrictions, although Meta has not released independently verified figures.
Court filings in the United States point to growing regulatory pressure on social media companies to strengthen child protection mechanisms.
In one recent case in New Mexico, a jury found Meta liable for failing to adequately protect minors from predatory behaviour on its platforms, ordering $375 million in damages under the state’s Unfair Practices Act.
Analysis suggests that Meta’s expanded AI rollout reflects both regulatory pressure and reputational risk, as governments tighten scrutiny over how large platforms manage underage users.
Experts say automated detection systems may improve enforcement at scale, but also raise questions about accuracy, privacy, and potential false positives.
“Age estimation through AI can improve safety enforcement, but it must be carefully balanced with privacy safeguards and appeal mechanisms,” said a digital policy researcher familiar with platform regulation.
Meta has not confirmed whether the latest expansion is directly linked to the court ruling, but the timing suggests mounting urgency to demonstrate stronger child safety enforcement across its global platforms.