Meta AI App Shares Private User Conversations Publicly Without Consent
Meta’s new stand-alone AI application raises significant privacy concerns: users are unknowingly publishing their private conversations. The app lets users share text chats, audio clips, and images generated during interactions with the AI, but many do not realize that tapping the share button posts their content to a public feed, exposing sensitive information to anyone.
Examples of shared content include personal questions, sensitive legal details, and private addresses. Some users have posted queries about illegal activities, medical issues, and family legal matters that include identifiable personal data. Meta does not clearly explain the privacy settings or the public nature of shared content, particularly when accounts are linked to public Instagram profiles.
The situation highlights a serious data protection risk under the EU General Data Protection Regulation (GDPR). Meta, one of the world’s largest technology companies, has released an app with a feature that effectively turns private AI interactions into public posts without explicit user consent or sufficient transparency. This lack of clear communication about data sharing and privacy controls can lead to violations of GDPR principles such as transparency, purpose limitation, and data minimization.
The app has been downloaded more than 6.5 million times since launch, and its design encourages sharing that can expose personal data unintentionally. This creates potential legal liability for Meta and risks harming users’ privacy rights. Companies must ensure that AI products comply with data protection regulations by providing clear user guidance and robust privacy safeguards, such as private-by-default settings and explicit confirmation before anything is published.
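The safeguard regulators typically expect here is data protection by design and by default (GDPR Article 25): content stays private unless the user gives an unambiguous, informed opt-in. Below is a minimal, hypothetical sketch in Python of what such a consent-gated share flow could look like. The `ShareRequest` and `publish` names and the `confirmed_public` flag are illustrative assumptions for this article, not Meta’s actual API.

```python
from dataclasses import dataclass
from enum import Enum


class Visibility(Enum):
    PRIVATE = "private"  # default: only the author can see the content
    PUBLIC = "public"    # visible to anyone on the public feed


@dataclass
class ShareRequest:
    user_id: str
    content_id: str
    # Privacy by default (GDPR Art. 25): nothing is public unless requested.
    visibility: Visibility = Visibility.PRIVATE


def publish(request: ShareRequest, confirmed_public: bool = False) -> str:
    """Publish content, requiring explicit confirmation for public visibility.

    `confirmed_public` models an unambiguous opt-in step, e.g. a dialog
    stating "This post will be visible to everyone", rather than a silent
    one-tap share button.
    """
    if request.visibility is Visibility.PUBLIC and not confirmed_public:
        raise PermissionError(
            "Public sharing requires explicit, informed confirmation."
        )
    scope = "everyone" if request.visibility is Visibility.PUBLIC else "only you"
    return f"Content {request.content_id} published; visible to {scope}."


# Sharing defaults to private; going public demands a separate opt-in step.
req = ShareRequest(user_id="u123", content_id="c456")
print(publish(req))  # private by default, no confirmation needed
```

The design choice the sketch illustrates is the inverse of the behavior reported above: public exposure is the exception that must be explicitly confirmed, never the silent outcome of a share button.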