Starting with the US, Apple will soon be rolling out a detection mechanism for images of child sexual abuse, a landmark move to protect children’s safety online. In 2020, reports of online enticement increased by nearly 98 per cent – illustrating the need for technology companies to do more to prevent and end child sexual abuse online. According to Apple, this feature will arrive in an update later this year for accounts set up as families in iCloud on iOS 15, iPadOS 15, and macOS Monterey.
“If we truly want to end online child sexual abuse, this is the type of engagement from the technology sector that we need,” said Marija Manojlovic, the Director of the End Violence Partnership’s Safe Online initiative. “In the future, we would also welcome a higher degree of engagement between the technology sector and organizations who are working in this field, especially those that are supporting vulnerable children and victims of abuse.”
The new technology works by scanning images as they are uploaded to iCloud Photos. In the process, an automated detection system flags images that have previously been confirmed as child sexual abuse material by the National Center for Missing and Exploited Children (NCMEC).
A match triggers a multi-step process in which the flagged content is reviewed by a human. If the material is confirmed to feature child sexual abuse, the user’s account will be disabled, NCMEC will be notified, and the content will be removed.
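The flag-then-review flow described above can be sketched in simplified form. This is an illustration only: Apple’s actual system uses a perceptual hash (NeuralHash) and cryptographic threshold techniques rather than plain cryptographic hashing, and the names, threshold value, and sample data below are all hypothetical.

```python
# Illustrative sketch of a hash-matching pipeline with a review threshold.
# NOT Apple's implementation: SHA-256 stands in for a perceptual hash,
# and KNOWN_DIGESTS stands in for the NCMEC-maintained database, which
# is never distributed in plaintext.
import hashlib

# Hypothetical digests of previously confirmed material.
KNOWN_DIGESTS = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),
}

# Hypothetical threshold: escalate to human review only after
# this many matches, reducing the impact of false positives.
REVIEW_THRESHOLD = 2

def digest(image_bytes: bytes) -> str:
    """Compute a digest for an uploaded image (stand-in for NeuralHash)."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_upload(images: list[bytes]) -> dict:
    """Count matches against the known set and decide on escalation."""
    matches = sum(1 for img in images if digest(img) in KNOWN_DIGESTS)
    return {
        "matches": matches,
        # Human review (and any account action) happens only
        # once the match count crosses the threshold.
        "needs_human_review": matches >= REVIEW_THRESHOLD,
    }
```

The threshold step mirrors the multi-step safeguard in the article: automated matching alone never disables an account; it only queues material for human confirmation.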
Additional safety measures include communication tools that help parents support their children’s digital safety, warnings about sensitive content in iMessage, and updates to Siri and Search that help parents and children access expanded information about online sexual abuse.
Though the tension between children’s safety and user privacy is ongoing, this new identification system will help flag – and remove – child sexual abuse material, and it points to a future where the privacy of users’ content is preserved while children are also protected from online sexual abuse.