Children regularly use technology, the internet, and apps for everything from education to gaming and entertainment. A 2020 Pew Research survey found that 80% of parents said their children aged five to 11 use a tablet, and 63% said their children use smartphones. Even young children have a digital presence: 48% of parents said their children aged five and younger use a tablet, and 55% said they use a smartphone. Technology is meeting kids where they are, but is it equipped to protect them at the same time? As these younger generations come of age, brands are being held accountable for responsible data sharing and ethical practices for youths online.

Across the wider web, brands, governments, and corporations took major steps toward protecting minors in August of this year.

Image by bruce mars, courtesy of Unsplash

China sued Tencent after Beijing prosecutors determined that WeChat's Youth Mode did not comply with laws protecting minors. As a result of the findings, regulators also set limits on how long minors can play popular games such as Honour of Kings. Could the case set a precedent? Tencent's share price fell more than 10%, indicating investors did not take these repercussions lightly, either.

Apple introduced new child protection technology aimed at preventing child sexual abuse material (CSAM) from ever appearing in iCloud. The tech in question allows Apple to scan users' phones for images of abusive content.

As part of a new series of protections, Google announced that it would restrict ad targeting based on the age, gender, or interests of users under 18, and that it would remove images of minors from Google search results at their request.

Image by Jelleke Vanooteghem, courtesy of Unsplash

The move comes soon after Facebook and Instagram put similar regulations in place for minors. On Instagram, younger users can only be targeted based on their age, gender, and location. Facebook made accounts for those under 16 private by default and introduced restrictions for what it calls "potentially suspicious accounts": users who have recently been reported or blocked by a minor will no longer be shown young people's accounts at all.

Tech platforms are also implementing new restrictions to protect kids' health. TikTok has made substantial adjustments to the app aimed at supporting teenage users' wellbeing. To promote healthy habits and better sleep, the app will pause push notifications after 9:00 pm for users aged 13-15, and after 10:00 pm for those aged 16-17. The platform, which already restricts direct messages by default for users aged 13-15, has expanded that setting to users aged 16-17 as well. Now, users up to age 17 must opt in to direct messaging in order to use the feature.

Courtesy of Google

As kids continue to spend more time online, brands need to adjust their policies and platforms to better protect them. Any brand in the technology space should refocus its offerings to ensure it is protecting younger users on its platforms, websites, and apps.

