Instagram has announced a new feature that will alert parents if their teenage children repeatedly search for terms related to suicide or self-harm in a short period. This move comes amid growing pressure on governments to implement regulations similar to Australia’s ban on social media use for individuals under 16.
Starting next week, Instagram, which is owned by Meta Platforms Inc., will notify parents enrolled in its optional supervision setting in Canada, the United States, Britain, and Australia if their children repeatedly attempt to search for or access content related to suicide or self-harm. The platform described the alerts as an extension of its efforts to shield teenagers from potentially harmful content, pointing to its strict policies against material that promotes or glamorizes self-harm.
Instagram’s current policy is to block such searches and direct users to support resources, which the company says reflects its commitment to a safe online environment for young people. Governments worldwide are increasingly focused on protecting children from online harm, spurred in part by incidents such as the AI chatbot Grok being used to generate inappropriate imagery of people without their consent.
In response, Britain has been weighing measures to protect children online following Australia’s lead last December, while Spain, Greece, and Slovenia have also signaled interest in restricting children’s access to harmful online content.
Instagram has also introduced “teen accounts” for users under 16, which require parental approval to modify settings. The accounts come with enhanced parental controls and privacy protections: parents can opt in to additional monitoring, and teens are blocked from viewing sensitive content, including sexually suggestive or violent material.
