Apple Unveils Child Sexual Abuse Material (CSAM) Detection Efforts with Release of New Update

On August 5, 2021, Apple announced that with the release of its iOS 15, iPadOS 15, watchOS 8, and macOS Monterey updates (expected by the end of the year), all images uploaded to iCloud will be scanned for child sexual abuse material (CSAM). Any detected images of child exploitation will then be reported to law enforcement. While this represents a much-needed step toward protecting victims of CSAM and prosecuting perpetrators, the announcement has drawn a massive amount of criticism, confusion, and even calls to boycott Apple products. Apple users are voicing concerns about privacy, the functionality and logistics of the scanning and reporting system, and the possibility that false positives could incriminate innocent people. Many are also skeptical that the software will work as intended, pointing to Facebook's and Twitter's failed attempts to detect CSAM.


Failed Attempts

While Facebook uses software to scan for and report CSAM, its efforts have been largely unsuccessful. In fact, Facebook is considered the world's leading facilitator of CSAM and human trafficking, with Facebook and Facebook Messenger responsible for "16.8 million of the 18.4 million reports worldwide of CSAM (91 percent of the total)" in 2018. This is even more concerning considering that Facebook is only capable of detecting "previously identified images but has trouble detecting new images, videos and livestreaming." And while Facebook sends reported CSAM from its platform to be confirmed by humans at "non-profit groups such as the National Center for Missing and Exploited Children," these organizations end up swamped by the volume of material they receive and are unable to thoroughly investigate the reported images. Additionally, the platform lacks a formal age verification process, allowing children to join the site and be exposed to potential predators. Perhaps most worrisome is that, in 2020, Facebook spent nearly $20 million lobbying the United States government, much of it to fight proposed legislation that would enforce online safety measures, curb sexual exploitation, and limit data collection from children.

Twitter, meanwhile, has become a hotbed for CSAM, found to be responsible for "nearly half of the child abuse content in the social media space." Despite this, the company has done little to remedy the problem, and according to some experts has even made it worse. Michael Salter, an associate professor of criminology at the University of New South Wales, noted that Twitter quietly changed its terms of service to allow discussion about "attraction towards minors" with the proviso that users "don't promote or glorify child sexual exploitation in any way." Salter added that this allows "large groups of pedophiles" to have "unmonitored public conversations," leading to an increase in "users who endorse contact offending, justify child sexual abuse material, and demand access to child sexual abuse dolls."

Twitter disputed these claims, with a spokesperson revealing that the company, "from January to June 2019, suspended a total of 244,188 unique accounts for violations related to child sexual exploitation." Twitter also revealed that it uses PhotoDNA and various other technological tools to flag 91% of these suspended accounts. These measures may prove largely futile, however, since "Twitter's head of product has endorsed encrypting Twitter's DM function," meaning that CSAM shared there would not be reported to authorities.

Even when CSAM is reported to authorities, Twitter has been exceedingly reluctant to take it down. In December of 2019, a now 17-year-old Floridian and his mother sued the company for refusing to take down what they described as "child porn" that he had been coerced and blackmailed into making at the age of 13 by sex traffickers on Snapchat. The content had been reshared and had received hundreds of thousands of views, but even when the family explained the context behind the horrific situation and asked for the content to be taken down, Twitter denied their request because "they didn't find a violation of [their] policies" and "no action [would] be taken." It was not until the family "connected with an agent from the Department of Homeland Security" that the videos were removed from the site. This gross contradiction between Twitter's initial statement and its User Agreement shows the company's overall lack of concern for thwarting the spread of CSAM, unlike what Apple seems to be pledging in its most recent statement about its new CSAM-scanning software.


How the Software Works

Apple's new update will allow the company to scan iCloud photos for CSAM and "report instances of CSAM to the National Center for Missing and Exploited Children." The mention of iCloud is important: videos and "private photo libraries that haven't been uploaded to iCloud" will not be scanned. This detection is made possible by a hashing technology called NeuralHash, which "creates a string of numbers and letters" (called a "hash") for each image and then checks it against "a database of hashes provided by" the National Center for Missing and Exploited Children (NCMEC). To be clear, only hashes "on both Apple's servers and user devices" are being compared, not "actual images," and the matching itself happens on the user's device rather than on Apple's servers or in the cloud. This, according to Avast Chief Privacy Officer Shane McNamee, is actually "pro-privacy," since "[the] data [is not being pulled] off their phone[s] and onto [Apple's] servers…minimizing the data you're sending."
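
Apple has not published NeuralHash's internals, but the general idea behind perceptual hashing can be sketched with a far simpler technique. The snippet below uses a basic "average hash" and a made-up placeholder standing in for the NCMEC-provided hash database; it is a minimal illustration of the concept, not Apple's actual algorithm.

```python
# Illustrative sketch only: Apple's NeuralHash is proprietary, so this uses a
# simple "average hash" to show the general idea of perceptual hashing.
from PIL import Image  # assumes the Pillow library is installed

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint that tolerates small edits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether a pixel is brighter than the image average.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Hypothetical stand-in for the database of hashes NCMEC would supply.
KNOWN_HASHES = {0x8F3C0E1F0F1F3F7F}

def matches_known_hash(path: str) -> bool:
    """On-device check: compare the photo's fingerprint against the known set."""
    return average_hash(path) in KNOWN_HASHES
```

Because only the fingerprint is compared, a check like this can run entirely on the device, which is the property McNamee describes as "pro-privacy."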


Concerns 

Apple claims there will not be issues of "detect[ing] parents' photos of their kids in the bath, for example, as these images won't be part of the NCMEC database." According to the company, the system's safeguards, including a threshold requiring multiple matches, ensure that "lone errors will not generate alerts, allowing [A]pple to target an error rate of one false alert per trillion users per year." Only after that threshold is crossed does a person (an Apple employee) review the flagged images, which is the first point in the process at which a human sees them. Apple has also announced that there is an opportunity to appeal to the company if a user "think[s] their account was flagged by mistake."
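
To see why requiring multiple matches makes "lone errors" harmless, consider a rough back-of-the-envelope calculation. The per-image error rate and threshold below are hypothetical illustrations, not Apple's published parameters.

```python
# Back-of-the-envelope illustration of why a match threshold suppresses false
# alerts. The error rate and threshold here are hypothetical, not Apple's.
from math import comb

def prob_false_alert(n_images: int, p_false: float, threshold: int) -> float:
    """P(at least `threshold` false matches among n independent image checks)."""
    below_threshold = sum(
        comb(n_images, k) * p_false**k * (1 - p_false) ** (n_images - k)
        for k in range(threshold)
    )
    return 1.0 - below_threshold

# A library of 10,000 photos with a one-in-a-million per-image error rate:
print(prob_false_alert(10_000, 1e-6, 1))   # ~0.01: a single stray match is plausible
print(prob_false_alert(10_000, 1e-6, 30))  # effectively zero (rounds to 0.0 here)
```

Even with a generous per-image error rate, the chance of many independent false matches piling up on one account collapses toward zero, which is the intuition behind the "one false alert per trillion users per year" target.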

On the subject of user privacy, it should be mentioned that Apple's commitment to it remains unchanged: the company has consistently voiced support for "encrypted iMessages" and has even set a precedent of withholding data from governments that press it to release such data, leading to various legal battles between Apple and those governments.

Additionally, Apple has worked to ensure that this new technology remains effective even when reshared CSAM has been heavily edited or disguised, using "additional layers of scanning called 'threshold secret sharing' so that 'visually similar' images are also detected." This detection is determined by a scoring system: if an image is judged a likely match for verified CSAM and is "above a certain percentage score, [the photos will] move on to the next review phase." A picture that is exactly the same will register as a 100% match, while a photo altered by, for example, cropping might register as only a 70% match, which can still be enough to "move on to the next review phase." Such photos are then reviewed by humans, after which "[i]f child pornography is confirmed, the user's account will be disabled and the National Center for Missing and Exploited Children notified."
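
The percentage-score idea can be pictured by comparing two image hashes bit by bit, as in the sketch below. The 70% cutoff simply mirrors the cropping example above; Apple has not disclosed its actual scoring or thresholds.

```python
# Illustrative scoring sketch: compare two 64-bit perceptual hashes and turn
# their agreement into a percentage. The 70% cutoff is not Apple's real value.
def similarity_percent(hash_a: int, hash_b: int, bits: int = 64) -> float:
    differing = bin(hash_a ^ hash_b).count("1")  # Hamming distance
    return 100.0 * (bits - differing) / bits

def needs_human_review(hash_a: int, hash_b: int, cutoff: float = 70.0) -> bool:
    """Pass the image to the next review phase if its score clears the cutoff."""
    return similarity_percent(hash_a, hash_b) >= cutoff

# An identical image scores 100%; a lightly edited copy can still score high.
print(similarity_percent(0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F0))  # 100.0
print(similarity_percent(0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0FF))  # 93.75
```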


Additional Efforts

While Apple has not historically been at the forefront of the fight against CSAM, the company has made considerable progress. In the past, Apple consistently flagged fewer cases of child sexual abuse than its tech rivals: last year, for instance, it reported just 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million. That enormous gap is due in part to Apple's decision not to scan for such material, citing the privacy of its users. However, Apple has announced that, in a change from its past policies, its new update will focus on protecting victims of child sexual abuse and human trafficking while maintaining its dedication to user privacy.

The company is also working to extend these anti-CSAM efforts to iMessage, Siri, and Search. For iMessage, if a child on an iCloud Family Sharing plan receives sexually explicit content, they will be presented with a blurred image and a sensitivity warning. If the content is opened, the child will receive information about why it is sensitive and, more importantly, a parent registered to the iCloud Family Sharing plan will receive a notification. A similar procedure applies if a child tries to send sensitive material; if they are under 13, their parents will receive a message alerting them that sensitive content was sent. This will be accomplished through on-device machine learning while preserving end-to-end iMessage encryption.
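
The iMessage flow described above can be summarized as a simple decision sketch. Every name in it, including the "looks explicit" flag, is a hypothetical stand-in for the on-device machine learning Apple describes, not its implementation.

```python
# Hypothetical sketch of the iMessage child-safety flow described above; the
# account fields and the `looks_explicit` flag stand in for Apple's on-device
# machine learning and are not its actual API.
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    on_family_sharing_plan: bool

def handle_incoming_image(child: ChildAccount, looks_explicit: bool, opened: bool) -> list[str]:
    actions: list[str] = []
    if looks_explicit and child.on_family_sharing_plan:
        actions.append("blur the image and show a sensitivity warning")
        if opened:
            actions.append("explain to the child why the content is sensitive")
            actions.append("notify a parent registered to the Family Sharing plan")
    return actions

def handle_outgoing_image(child: ChildAccount, looks_explicit: bool) -> list[str]:
    actions: list[str] = []
    if looks_explicit and child.on_family_sharing_plan:
        actions.append("warn the child before the image is sent")
        if child.age < 13:
            actions.append("alert the parents that sensitive content was sent")
    return actions

print(handle_incoming_image(ChildAccount(12, True), looks_explicit=True, opened=True))
```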

As for Siri, if an Apple user asks Siri how to report CSAM or child exploitation, they will be directed to appropriate resources and steps to take. Apple has also planned an intervention for people searching for CSAM: an explanation that interest in this content is harmful, along with a list of resources for getting help. This stands in stark contrast to Twitter's update to its terms of service, which, as critics noted, effectively tolerated pedophiles on its platform. If all goes as planned, Apple's new CSAM-scanning software could protect millions of children and even become the new standard for how technology companies address child abuse and exploitation. With successful results, we are likely to see similar initiatives from other technology giants in the near future.


Afia, originally from New Jersey, is a rising second-year student at Bowdoin College in Maine. She is currently undeclared, but interested in pursuing an Asian Studies and Government and Legal Studies double major. As a Japanese language student and a budding writer, Afia is no stranger to the power of words, so she is interested in using them to communicate and educate people on important issues. Some of Afia's favorite hobbies include cooking (especially oatmeal), watching anime, listening to music, and watching random shows on YouTube.
