Instagram has announced a new feature that will alert parents when their teenage children repeatedly search for terms related to suicide or self-harm within a short time frame. The move comes as governments face mounting pressure to adopt measures similar to Australia’s ban on social media use for those under 16.
The platform, owned by Meta Platforms Inc., said it will send notifications to parents who have enabled its supervision setting when their teens attempt to access content related to suicide or self-harm. The alerts are scheduled to begin next week for users in Canada, the United States, Britain, and Australia.
In a statement, Instagram said, “These alerts enhance our current efforts to safeguard teenagers from potentially harmful content on the platform. We maintain strict policies against any content that promotes or glorifies suicide or self-harm.” The platform already blocks such searches and directs users to support resources.
Governments worldwide are increasingly focused on protecting children from online harm, particularly following concerns over the AI chatbot Grok, which has been implicated in generating non-consensual sexualized images. In January, Britain announced it was considering restrictions to shield children online, following Australia’s move in December. Spain, Greece, and Slovenia have also recently signaled intentions to restrict access.
Instagram’s “teen accounts” for users under 16 require parental consent to change settings, and parents can opt in, with their teenagers’ agreement, to an additional level of monitoring. These accounts also prevent young users from viewing “sensitive content,” including sexually suggestive or violent material.
