2024 in Review: Child Online Safety
A look back at the major changes and emerging trends in child online safety regulation around the world in 2024.
2024 marked a transformative year in child online safety regulation, with jurisdictions around the world implementing comprehensive frameworks to protect young users in digital spaces. Apart from the increased emphasis on age assurance, other key themes included content moderation standards, greater awareness of dark patterns targeting children, and enhanced transparency requirements.
From dedicated online safety laws to gaming regulations, consumer protection frameworks, and data privacy regimes, these developments reflect a growing global consensus on the need for robust safeguards in digital environments frequented by young users. At the same time, these laws also highlight the challenge of balancing protection with platform autonomy and freedom of expression.
Content moderation
In 2024, regulators worldwide evolved their approaches to content moderation on online platforms. The overall legislative trend is towards requiring more active content moderation and filtering; however, some jurisdictions, such as the US, have pushed back on these laws, citing concerns about free expression.
- In February, the EU Digital Services Act (DSA) started applying to online platforms. The DSA, among other things, requires online platforms to put in place measures to counter the spread of harmful content and to publish information on their content moderation practices. On this basis, the EU Commission has already opened a dozen investigations examining matters such as the human resources dedicated to content moderation (including moderators' qualifications and linguistic expertise), the accuracy of moderation decisions, and the automated moderation systems relied on.
- In March, the Italian Competition and Market Authority (the "AGCM") imposed a fine of €10 million on three companies within the Bytedance group, namely the UK, Irish, and Italian TikTok entities ("TikTok"). This was triggered by the AGCM's investigation into the viral "French scar" challenge, in which users showed how to create a red mark on their cheekbone mimicking a large scar. On content moderation, the AGCM found that automated systems struggle with complex violations, making human review critical for less obvious cases. TikTok's moderation team had been recruited against generic criteria (e.g., awareness of social issues, internet laws, and shift flexibility), its training focused on overtly unlawful, violent, or sexual content, and its automated systems were less effective for non-English content.
- In July, it was announced that Singapore's Infocomm Media Development Authority (IMDA) will introduce a new Code of Practice for App Distribution Services, requiring mobile app stores to put in place certain content moderation measures and to ensure that providers of apps with user-generated content have measures in place to remove harmful content. A draft code released for public consultation in October requires content moderation for both "harmful content" (including sexual content, violence, suicide and self-harm, and cyberbullying) and "inappropriate content," with stricter standards applying to "harmful content."
- In November, the Cyberspace Administration of China (CAC) published the "Guidelines for the Construction of Mobile Internet Mode for Minors", which, among other things, mandate content filters to ensure minors are served "age-appropriate material".
- In December, the UK's Ofcom published its illegal content risk assessment guidance and the first version of its illegal harms codes of practice, kickstarting the first set of duties under the UK's Online Safety Act (OSA). The OSA takes a proportionate approach: how a service is regulated depends on its size and the risk of users encountering illegal content on it. Ofcom emphasises that services should carry out the measures in a way that is cost-effective and proportionate to them. On content moderation, there is a core set of measures recommended for all services, and more rigorous measures for larger and/or riskier services. Broadly speaking:
- All U2U services must have systems and processes designed to swiftly take down illegal content once they become aware of it (reactive measures).
- For multi-risk U2U services and large U2U services, more rigorous proactive requirements apply. These include the development and implementation of internal content policies, performance targets, and content review prioritisation based on factors such as content virality, potential severity, and likelihood of illegality, including trusted flagger reports.
- These larger services must also properly resource their content moderation functions to effectively implement their policies and targets, while ensuring moderators receive appropriate training. Notably, Ofcom has taken a flexible approach regarding the use of automated tools versus human review for content moderation. While services typically employ both methods, Ofcom's proposals do not mandate specific ratios between automated and human review processes.
- For larger and higher-risk services specifically, automated content moderation techniques such as "hash matching" and URL detection must be implemented to analyse content for CSAM (Child Sexual Abuse Material).
- In July, in a pair of cases, Moody v. NetChoice and NetChoice v. Paxton, the US Supreme Court unanimously vacated the decisions of two lower courts concerning laws passed in Florida and Texas that were intended to change how social media companies moderate user posts on their platforms. The takeaway is that, in the view of the unanimous court, a platform's decisions about what content is shown and prioritised on its platform are themselves a form of expressive speech, and any legislative attempt to regulate those decisions must be closely scrutinised for facial unconstitutionality. The decision is a win for free speech advocates and casts doubt on the future of content moderation laws in the US.
Dark patterns
Another key regulatory focus in 2024 was addressing manipulative design practices that exploit children's vulnerabilities online. In July, the Global Privacy Enforcement Network (GPEN) reported on a sweep of more than 1,000 websites and mobile apps, alleging that the majority used deceptive design patterns that make it difficult for users to make privacy-protective decisions. The GPEN Sweep involved 26 data protection authorities and prompted regulators around the world to take investigatory and enforcement action. Major developments included new frameworks for regulatory cooperation and substantial enforcement actions against leading platforms.
- In February, the EU DSA introduced an outright ban on the use of dark patterns on the interfaces of online platforms; the EU Commission has since started nine investigations to verify whether users are prevented from making autonomous and informed choices or decisions.
- In May, the Netherlands Authority for Consumers and Markets (ACM) fined Epic Games €1.125 million, determining that Epic used unfair commercial practices aimed at children in Fortnite. In the ACM's determination, Epic placed undue pressure on children by using ads which directly exhorted children to make purchases, and by using deceptive countdown timers for items on offer, misleading children into believing they had limited time to make purchases.
- In July, off the back of the GPEN report, the Spanish Data Protection Agency (AEPD) also published a report on the influence of deceptive design patterns on minors.
- In October, the EU Commission published its Digital Fitness Check, which assessed the effectiveness of the current European consumer protection framework and laid the groundwork for future initiatives. Among the key findings, dark patterns and addictive game design features were identified as particularly problematic, especially for young gamers.
- In November, the French data protection authority (CNIL) and the French consumer protection authority (DGCCRF) signed a new cooperation protocol. The two authorities aim to harmonise their interpretations of the data protection and consumer protection frameworks, with a particular focus on dark patterns.
Transparency: privacy policies & loot box disclosures
Finally, 2024 saw many regulatory initiatives requiring service providers to increase transparency in the provision of online services to children, particularly in two key areas: privacy policies and loot box disclosures. Major developments included comprehensive guidance on writing privacy policies and stricter requirements for disclosing the probabilistic mechanics of loot boxes in games.
Privacy Policies
- In March, Singapore's Personal Data Protection Commission (PDPC) published the Advisory Guidelines on the PDPA for Children's Personal Data in the Digital Environment, which acknowledges that children develop at different rates and have varying capabilities. Since no single communication approach works for all children, the PDPC holds that organisations must tailor their content with age-appropriate language and media formats, such as infographics and video clips. Importantly, privacy policies must use clear, child-friendly language to ensure children can understand the implications of giving and withdrawing their consent.
- In April, Korea's Personal Information Protection Commission issued a 160-page guide on how to prepare a personal information processing policy, which recommends a separate children's privacy policy and even includes a detailed example of how to write one. It shows how common items can be rephrased in child-friendly language and includes instructions on how children can exercise their data subject rights.
- In August, the Saudi Data & AI Authority published new guidance which recommends that, if children's data is processed, the relevant privacy policy should be age appropriate.
Loot box probability mechanics
- In April, it was reported that South Korea's Fair Trade Commission had commenced action against a number of game publishers, alleging unfair and deceptive practices with respect to loot box probability mechanics. In July, it was reported that the South Korean Game Rating and Administration Committee (GRAC) had reviewed 1,255 games since the new loot box probability disclosure rules came into force on March 22, 2024. GRAC identified 266 games in total (60% of them from foreign-headquartered publishers) that it alleges violate the rules. Correction orders have been issued, and GRAC has indicated that games could be banned from distribution if they fail to comply.
- In July, the UK Advertising Standards Authority (ASA) upheld a complaint against Electronic Arts over an inadequate loot box disclosure in an ad for a mobile game. In this case, a warning stating that the game "Includes optional in-game purchases (includes random items)" was displayed in a small, light-grey font and appeared on screen for only two seconds, which the ASA found insufficient.
- In October, the EU Commission's Digital Fitness Check also identified loot boxes as a concerning addictive design feature and an area for improvement in consumer protection. Against this background, the EU Commission announced that it will analyse whether additional legislation or other action is needed in the medium term to ensure equal fairness online and offline.
Looking Ahead
These developments reflect a growing global consensus on the need for stronger digital protection frameworks, particularly for minors. The balance between automated and human content moderation, the fight against dark patterns, and the push for greater transparency are likely to remain key regulatory priorities moving forward.