Meta, the parent company of Instagram, is rolling out new safety features to protect teenagers and combat potential scammers on its platform amidst growing concerns over harmful content affecting young users.
In a recent statement, Meta announced a trial of message-blurring technology in Instagram's direct messages, aimed at shielding teenagers from nudity and malicious contacts. The feature uses on-device machine learning to analyze images for nudity before they are sent; it is enabled automatically for users under 18 and recommended for adults. Because the analysis happens on the device itself, the protection works even in end-to-end encrypted chats, preserving privacy alongside safety.
Additionally, Meta disclosed efforts to develop technology to identify accounts engaged in sextortion scams. The company plans to test pop-up messages warning users who may have interacted with such accounts, demonstrating a proactive stance against online exploitation.
These initiatives follow Meta’s January commitment to conceal sensitive content from teenage users on Facebook and Instagram, focusing on topics like suicide, self-harm, and eating disorders.
Meta's actions come amid legal pressure, including a lawsuit from 33 U.S. states alleging the company misled the public about the risks its platforms pose to young people, as well as inquiries from the European Commission about its child protection measures.