Child safety and technological solutionism: Discussing social media restrictions in the digital age

Australia has proposed a complete ban on social media for children under sixteen. This essay examines the privacy threats, practical failures, and neglect of underlying psychosocial factors that such a ban entails; India's rights-based protections under the Digital Personal Data Protection Act, 2023, including verified parental consent, limits on tracking of children, and restrictions on targeted advertising; and the need to balance law with digital literacy, family engagement, and platform accountability.

In the digital age, the lives of children and adolescents are no longer limited to the physical world. Social media, online platforms, and virtual communities have become a crucial part of their learning, expression, identity, and social connections. In this context, the question of how to keep children safe on these platforms naturally arises. Australia’s recent proposal to completely ban social media for children under the age of sixteen stems from this concern. However, the move has also drawn criticism as an overly simplistic, technology-based answer to a complex psychosocial problem, an approach often termed “technological solutionism.”

Technological solutionism rests on the assumption that social, cultural, and psychological problems can be solved solely through technological rules, restrictions, or tools. This approach appears particularly limited in the context of children, as children’s mental health, self-esteem, social comparison, online harassment, loneliness, and behavioral changes are not resolved simply by blocking access to a platform. These problems are deeply intertwined with family, school, peer groups, social environments, and broader cultural trends. Australia’s proposal appears focused on outright prohibition rather than addressing these complex causes.

According to Australia’s proposed law, social media use would be restricted for children under the age of sixteen and mandatory age verification would be implemented. While this measure appears to be drastic and decisive for the safety of children, its practical implications are questionable. Age verification could require the collection of extensive personal information, posing a threat to the privacy of children and their parents. Furthermore, technological measures can be easily circumvented through virtual private networks, fake identities, or new, less regulated platforms. This not only undermines the effectiveness of the law but could also expose children to more risky digital spaces.

Another serious aspect of a complete ban is that it could deprive children of the positive opportunities that social media offers. Today, many children access educational content, participate in creative activities, learn about social issues, and connect with mental health support groups through these platforms. For children, especially those in rural, disadvantaged, or socially marginalized communities, digital platforms are often their only means of expression and collaboration. A complete ban could further deepen social inequalities. This approach also suggests that the root of the problem lies with children or technology, when in reality, the problem lies with the social framework within which children are digitally socialized.

India has adopted a different and relatively balanced approach in this regard. The Digital Personal Data Protection Act, 2023, proposes a rights-based and data-centric approach to protecting children, rather than outright prohibition. The Act recognizes that isolating children from the digital world is neither possible nor desirable. Instead, it is essential to hold digital platforms accountable and protect children’s rights. The law mandates verified parental consent for the processing of children’s personal data, strengthening the role of the family.

An important aspect of the Digital Personal Data Protection Act is that it explicitly prohibits behavioral tracking, personality-based categorization, and targeted advertising directed at children. This seeks to protect children from exploitation as consumers. Furthermore, the Act gives children rights of access to, correction of, and erasure of their data, along with a mechanism for filing complaints. This approach is an important step towards viewing children as citizens with rights, rather than merely as objects of protection.

Additionally, India has established an independent Data Protection Board, which can impose penalties for violations. This ensures accountability for platforms and data processing entities without requiring children to be excluded from digital platforms. This model also shows that legislation must be accompanied by social awareness, digital literacy, online safety education in schools, and expanded mental health services. Child safety cannot be ensured through legislation alone; it requires a holistic societal effort.

Comparing the approaches of Australia and India reveals, on the one hand, a policy based on control and prohibition and, on the other, a framework based on balance, participation, and empowerment. Australia’s proposal offers the appearance of a quick solution, but in the long run it may face practical, constitutional, and social challenges. In contrast, India’s model is slower and more complex, but it better harmonizes children’s autonomy, privacy, and development.

Ultimately, children’s online safety is not just a question of what to keep them away from, but of how to empower them. Child safety in the digital age does not mean isolating children from technology; it means enabling them to use technology safely, consciously, and responsibly. Simple technological fixes cannot resolve complex human challenges. In this context, India’s Digital Personal Data Protection framework indicates that a safe and inclusive digital future for children can be built only through balanced regulation, parental engagement, platform accountability, and social support.