Instagram Tests AI to Detect Underage Users
Meta Platforms, the parent company of Instagram, has begun testing artificial intelligence (AI) to verify users' ages, focusing in particular on identifying teenagers who may have provided false birthdates. Meta has used this technology for some time, but it will now proactively flag accounts suspected of belonging to minors, even when the birthdate entered at registration says otherwise. When such cases are detected, the accounts are reclassified as teen accounts, which come with stricter privacy and safety settings.
Teen accounts on Instagram are private by default, limiting who can view their content and restricting direct messages to only those they follow or are connected with. Additionally, these accounts will have reduced exposure to sensitive content, such as videos showing violence or promoting cosmetic procedures. Instagram will also notify teens if they spend more than 60 minutes on the app and will activate a “sleep mode” that disables notifications and sends automatic replies to messages between 10 p.m. and 7 a.m.
Meta’s AI system assesses various indicators to estimate a user’s age, including the type of content interacted with, profile details, and the account creation date. These measures come amid growing concerns about the impact of social media on young users’ mental health and increasing legislative efforts to enforce age verification. Some proposed laws have faced legal challenges, but social media companies continue to support age checks, particularly by app stores.
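The signal-based estimation described above can be pictured as a simple scoring function. The sketch below is purely illustrative: every feature name, weight, and threshold is invented, since Meta has not published how its model works; the only grounded idea is that behavioral signals can override a stated age.

```python
# Toy sketch of signal-based age estimation. All features, weights, and the
# threshold are hypothetical; Meta's actual model is not public.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int                 # age derived from the birthdate the user entered
    teen_content_ratio: float       # 0..1, share of interactions with teen-oriented content
    account_age_days: int           # time since account creation
    profile_mentions_school: bool   # e.g. bio references a high school

def likely_minor(sig: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True if the behavioral signals suggest the user is under 18,
    regardless of the age they stated at registration."""
    score = 0.0
    score += 0.6 * sig.teen_content_ratio
    if sig.profile_mentions_school:
        score += 0.3
    if sig.account_age_days < 365:   # newer accounts carry less history
        score += 0.1
    # Note: the stated age alone does not clear the account; the behavioral
    # score can still push it over the threshold.
    return score >= threshold

# Example: an account claiming to be 25 but interacting like a teenager
suspect = AccountSignals(stated_age=25, teen_content_ratio=0.8,
                         account_age_days=90, profile_mentions_school=True)
print(likely_minor(suspect))  # → True
```

In this toy version, an account flagged as a likely minor would then be switched to the stricter teen-account settings described earlier.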
Instagram will also send parents notifications with guidance on discussing the importance of accurate age reporting with their teens. Meta and other platforms emphasize that app stores should bear responsibility for age verification, a stance that responds to criticism that the platforms themselves have not done enough to protect children or to keep users under 13 off their services.
Source: Instagram tries using AI to determine if teens are pretending to be adults