Apple Recognizes It Jumped Too Quickly On Its CSAM Detection System; Delays Implementation
from the good dept
Sometimes speaking out works. A month ago, Apple announced a series of new offerings that it claimed would be useful in fighting back against CSAM (child sexual abuse material). This is a real problem, and it’s commendable that Apple was exploring ways to fight it. However, the major concern was how Apple had decided to do it. Despite the fact that plenty of experts have long been working on ways to deal with this extremely challenging problem, Apple (in Apple fashion) went it alone and jumped right into the deep end, causing a lot more trouble than necessary: both because its implementation carried numerous serious risks that Apple didn’t seem to account for, and (perhaps more importantly) because the plan could wipe away years of goodwill built up in conversations among technologists, security professionals, human rights advocates, and others trying to find solutions that better balance the risks.
Thankfully, with much of the security community, the human rights community, and others calling attention to Apple’s dangerous approach, the company has now announced a plan to delay the implementation, gather more information, and actually talk to experts before deciding how to move forward. Apple posted an update (in tiny print…) on the page where it announced these features.
Update as of September 3, 2021: Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.
It’s good that the company has finally realized it moved too quickly on this and hadn’t fully understood the ramifications of its decision. It remains to be seen whether Apple will actually reckon with how dangerous its approach was, or simply make a few cosmetic changes to the system.
Notably, this announcement came out just as Scientific American released an interesting article that warns that one of the child safety features — one that had received less concern than the others — might harm children more than it helps. This is the “communication safety in messages” feature that would scan iMessages of kids under 13, blur messages that the system deemed sexually explicit, and alert parents if the kid sends or opens such a message.
There were some initial concerns about this, especially regarding LGBTQ children whose parents might not be understanding. However, many of those concerns were allayed by the details of the program, including the fact that it was opt-in and made it transparent to both the kids and the parents what was happening (so no sneaky surveillance or surprise alerts). It’s also specifically designed for child accounts set up in Family Sharing.
However, as the SciAm article notes, this system should still raise concerns, because it would teach kids that they’re always being watched:
In fact, even by having this feature, we are teaching young people that they do not have a right to privacy. Removing young people’s privacy and right to give consent is exactly the opposite of what UNICEF’s evidence-based guidelines for preventing online and offline child sexual exploitation and abuse suggest. Further, this feature not only risks causing harm, but it also opens the door for wider intrusions into our private conversations, including intrusions by government.
We need to do better when it comes to designing technology to keep the young safe online. This starts with involving the potential victims themselves in the design of safety systems. As a growing movement around design justice suggests, involving the people most impacted by a technology is an effective way to prevent harm and design more effective solutions. So far, youth haven’t been part of the conversations that technology companies or researchers are having. They need to be.
Again, there are real concerns here, and parents obviously want to protect their children. But over and over again we’ve seen that the way you do that is by teaching them how to handle dangerous situations, rather than by adding yet another layer of surveillance. The surveillance not only teaches them that they have no privacy, but also takes away their agency in learning how to deal with difficult situations themselves.