
Social Media Ban for Minors Under 15 and Digital Governance: Digital Protection or Digital Paternalism?

By VASILIS ZOGRAFOS
CEO, Vision Labs R&D Team; PhD Candidate in Computer and Data Science; BSc (Hons) CS, MBA IB, MSc DS
[email protected]
The debate over the imminent ban on social media use for children under the age of 15 is resurfacing with intensity in both European and national public discourse. The argument for the protection of minors appears strong and reasonable. Digital violence, addiction, the commercialization of data, and exposure to inappropriate content constitute real dangers. However, the critical question is not only whether restrictions should exist, but how a digital state perceives its role toward the new generation.
The logic of a horizontal ban exudes a classic administrative paternalism. The state undertakes the setting of an age limit, shifting the responsibility from the platforms themselves and their business models onto minor users and their families. The result is a simplistic solution to a complex problem. Digital reality does not operate within absolute boundaries, nor does technology obey linear prohibitions.
The digital state is called upon to respond at three levels: regulatory, technological, and pedagogical. The ban touches only the first level, and even then with a tool reminiscent of analog administration. The essence of the matter, however, lies in the architecture of the platforms, their content promotion algorithms, and their data collection practices.
At the European Union level, the Digital Services Act (DSA) has already established obligations for transparency, risk assessment, and the restriction of targeted advertising to minors. The problem is not a lack of a regulatory framework, but its implementation and enforcement. A mature digital state does not settle for mere reproduction of European rules. It must invest in national oversight structures, algorithm audit mechanisms, and expertise that allows for a substantive assessment of compliance.
The technological aspect is even more complex. Enforcing age limits presupposes reliable age verification. The solution of simple birth date declaration has proven insufficient. On the other hand, strict identification raises issues of privacy and proportionality. Here the responsibility of the digital state emerges: to develop secure, data-minimising verification systems that protect minors' data without creating new surveillance risks.
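The data-minimisation principle described here can be illustrated with a small sketch. In this hypothetical design (all names are illustrative, not any real eID service's API), a trusted attester computes the age check and discloses only a boolean claim, so the platform never receives the birth date itself:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgeAttestation:
    is_over_15: bool  # the only attribute ever disclosed to the platform

def attest_age(birth_date: date, today: date) -> AgeAttestation:
    """Run by the attester (e.g. a national eID service), not the platform:
    derives the minimal claim and discards the underlying birth date."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return AgeAttestation(is_over_15=age >= 15)

# The platform checks only the claim, never the personal data behind it.
claim = attest_age(date(2012, 6, 1), date(2025, 1, 15))
print(claim.is_over_15)  # a 12-year-old -> False
```

The design choice is the point: proportionality is achieved by moving the computation to the attester and shrinking the disclosed attribute to a single bit.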
The pedagogical level often remains outside the discussion. Digital literacy cannot be replaced by bans. Strengthening critical thinking, understanding attention-manipulation mechanisms, and education on digital time management constitute long-term but essential interventions. A state that invests only in legal barriers and not in knowledge remains institutionally deficient.
Platforms do not function as neutral communication conduits but as machines for maximizing “stay time,” based on recommendation systems that promote addictive, emotionally charged, and often superficial or mindless stimuli. A ban isolates the user without touching the business model that generates the problem.
A mature digital state should pose the question differently: Instead of excluding youth from the digital public sphere, why not impose obligations for differentiated algorithmic functioning for minor users? Technology allows for age categorization and dynamic content regulation. The same knowledge used to personalize advertisements can be utilized to limit addictive patterns and promote strictly educational content.
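The claim that the same personalisation machinery can be reoriented is technically modest. A minimal sketch, assuming a hypothetical scoring pipeline (the item fields, weights, and function names are illustrative, not any platform's real system): the engagement score is re-weighted per age band, boosting educational items and demoting addictive patterns for minor users.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement_score: float  # the platform's usual stay-time signal
    educational: bool
    addictive_pattern: bool  # e.g. infinite scroll bait, outrage content

def rank_for_user(items: list[Item], is_minor: bool) -> list[Item]:
    """Re-rank a candidate feed; only the weighting differs by age band."""
    def score(item: Item) -> float:
        s = item.engagement_score
        if is_minor:
            if item.educational:
                s *= 2.0   # promote educational content
            if item.addictive_pattern:
                s *= 0.1   # demote addictive patterns
        return s
    return sorted(items, key=score, reverse=True)

feed = [
    Item("viral challenge", 0.9, educational=False, addictive_pattern=True),
    Item("science explainer", 0.5, educational=True, addictive_pattern=False),
]
print([i.title for i in rank_for_user(feed, is_minor=True)])
# minor feed: 0.9*0.1 = 0.09 vs 0.5*2.0 = 1.0 -> the explainer ranks first
```

For an adult user the same code leaves the engagement ranking untouched, which is precisely the regulatory lever: the intervention sits in the weighting, not in access.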
Algorithms are designed with the clear goal of increasing interaction and profitability. When the state limits itself to age filters, it avoids intervening in the core of the algorithmic economy. A political decision is thus transformed into a managerial arrangement.
Critical thinking is the fundamental protection mechanism for youth in the digital space. Strengthening digital literacy, understanding how algorithms function, and training in the deconstruction of manipulation techniques constitute a long-term strategy. In contrast, a ban creates the illusion of control without empowering the user.
International experience shows that differentiating algorithmic settings by age category is technically feasible. In some countries, such as China, platforms are required to implement strict time limits and promote educational content for minor users, emphasizing cognitive fields rather than endless entertainment material. Although China’s political system differs radically from the European one, the technological principle remains clear: algorithms can be reoriented.
Greece, as a member-state of the European Union, possesses the regulatory tools to demand transparency, differentiation, and accountability in content recommendations to minors. The objective is not for youth to withdraw from the digital space, but for the space itself to be transformed into an environment that is less addictive and more educational. The essence of digital governance lies not in prohibiting access, but in reshaping the mechanisms of influence.
A digital state that limits itself to age-based prohibitions indirectly admits its inability to regulate the algorithmic power of platforms. A state that invests in critical thinking and the regulation of recommendation systems chooses the difficult but substantive path. The protection of minors is not achieved through isolation, but through institutional intervention in the core of the digital economy.
TO PARON