Why Apple’s CSAM Dilemma Risks Impacting iPhone 14

Apple has suddenly trapped itself in a security and privacy nightmare, just as iPhone 13 hits the streets. This now threatens to damage the next 12 months leading up to iPhone 14 and is starting to look like an impossible problem for Apple to solve.

The run-up to this week’s Apple iPhone and iOS launches has been a disaster by its standards. Repeated security warnings, as Israel’s NSO reportedly exploited multiple vulnerabilities, have compounded the fallout from its ill-timed and ill-conceived idea to run on-device screening for known child abuse imagery and explicit iMessage photos.

Apple remains committed to catching up with its peers and adding some form of CSAM screening to its cloud platform, but considering the backlash has “decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

The most stinging criticism of Apple’s plans was the risk that it would kowtow to U.S. and foreign government demands to expand the scope of on-device screening, that adherence to “local laws” would force its hand. Apple’s defense was to assure users that it would never bend to such demands and that it had protections in place. Unfortunately, just as iPhone 13 launched, those assurances have been very publicly dashed.

It isn’t China this time, where Apple has continually struggled to fend off government pressure; it’s Russia. Apple has a history of removing apps that displease the authorities in China, and now it has done the same with voting apps that threaten to undermine the status quo in Russia.

As reported by the New York Times, “Apple and Google removed an app meant to coordinate protest voting in this weekend’s Russian elections from the country on Friday... The decisions came after Russian authorities, who claim the app is illegal, threatened to prosecute local employees of Apple and Google.”

For Apple, this could not have come at a worse time. As the Times pointed out, what happened in Russia was “a display of Silicon Valley’s limits when it comes to resisting crackdowns on dissent around the world.” It completely undermines the scope-creep assurances that Apple pushed so hard in response to the recent CSAM backlash.

It turns out that Apple could not ignore its self-built mantra that what happens on your iPhone stays on your iPhone. But Apple also clumsily boxed itself in. The company argued that cloud photo screening as done by Google, Microsoft and others is privacy-invasive: “Existing techniques as implemented by other companies,” Apple said, “scan all user photos stored in the cloud. This creates privacy risk for all users.”

Apple used this risk to push its on-device alternative, “providing significant privacy benefits by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.” In retrospect, it should have left itself some room.
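
For illustration only, here is a minimal Python sketch of the kind of threshold-based matching Apple describes: nothing is flagged until an account accumulates a collection of matches against a list of known-image hashes. Every name, the hash function and the threshold value are placeholders of my own; Apple’s actual system relies on its NeuralHash model and cryptographic protocols that this does not attempt to reproduce.

```python
# Purely illustrative sketch: a simplified threshold-based matcher against a
# list of known-image hashes. This is NOT Apple's NeuralHash or its private
# set intersection protocol; every name and value here is a placeholder.

import hashlib
from typing import Iterable, Set

# Placeholder "known image" hash list (real systems use curated databases).
KNOWN_IMAGE_HASHES: Set[str] = {
    hashlib.sha256(b"example-known-image").hexdigest(),
}

# Placeholder threshold: no account is flagged until it holds a *collection*
# of matches, echoing Apple's "collection of known CSAM" wording.
MATCH_THRESHOLD = 30

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; a real system would match visually
    similar images, not just byte-identical ones."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(photos: Iterable[bytes]) -> int:
    """Count photos whose hash appears in the known-image list."""
    return sum(1 for photo in photos if image_hash(photo) in KNOWN_IMAGE_HASHES)

def should_flag_account(photos: Iterable[bytes]) -> bool:
    """Only a collection of matches above the threshold flags an account,
    so a stray false match never surfaces on its own."""
    return count_matches(photos) >= MATCH_THRESHOLD
```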

As I’ve said before, re-introducing on-device screening will be impossible without a repeat backlash. Apple has already explained in detail the security and privacy measures guarding its approach, which didn’t help last time. It’s unclear what else it could do, save watering down the measures to render them impotent.

If Apple was in any doubt as to the strength of feeling, then the well-reported protests outside Apple stores will have quickly changed that. “Organized by Fight for the Future, the Electronic Frontier Foundation, and a network of volunteers, the protests are demanding that Apple permanently shelve their dangerous proposal to install photo and message scanning malware on millions of people’s devices.”

What Apple should have done is quietly add some form of CSAM photo screening to iCloud, explaining that this has become the industry norm, but with some Apple-esque privacy innovations to make its system optically better than the others.

The only issue here would be any plans to end-to-end encrypt iCloud Photos, and some argued that Apple’s on-device approach was preparation for exactly that. But if that’s the case, then any client-side screening of end-to-end encrypted data would undermine that encryption anyway, just as we saw with the iMessage explicit imagery measures; and so that wouldn’t work either.

Apple went to such lengths to explain why cloud screening is a bad idea from a privacy perspective that it’s now hard for it to backtrack and opt for that solution without either appearing hypocritical or adding enough cleverness to escape its own trap. Any way you look at it, screening users’ photos in the cloud represents a privacy risk, although users have implicitly accepted this on CSAM grounds with no backlash.

But here Apple has also created a new issue for itself and others. The online debate that its CSAM moves fueled has put more detail about such screening measures into the public domain than has ever been there so openly before. Putting aside the risk that bad actors without the tech savvy to explore the darker recesses of the web now know more than they did, the wider issue is that the public can no longer dismiss this as robotic AI.

Apple’s confirmation that human reviewers would rule out false positives highlighted the very concept of such false positives. “Any time an account is identified by the system, Apple conducts human review before making a report... as a result, system errors or attacks will not result in innocent people being reported.” So now we have questions for other platforms: how many false positives are triggered and then ruled out by human review? Just how private are cloud photo platforms after all?
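
Again purely as a sketch of the workflow Apple describes rather than its actual pipeline: automated matching only fills a review queue, and a human reviewer decides whether anything is reported. All names and structures here are my own assumptions.

```python
# Purely illustrative sketch of a human-review gate between automated hash
# matching and any report; all names and structures are assumptions.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FlaggedAccount:
    account_id: str
    matched_photo_ids: List[str] = field(default_factory=list)

def report_confirmed_accounts(
    queue: List[FlaggedAccount],
    reviewer: Callable[[FlaggedAccount], bool],
) -> List[str]:
    """Pass every automatically flagged account through a human reviewer;
    only confirmed matches are reported, and false positives are dropped."""
    return [account.account_id for account in queue if reviewer(account)]

# Usage with a stub reviewer that rejects every flag, i.e. the matches were
# all false positives and nothing is reported.
if __name__ == "__main__":
    flagged = [FlaggedAccount("user-123", ["photo-1", "photo-2"])]
    print(report_confirmed_accounts(flagged, reviewer=lambda account: False))  # []
```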

Apple is clearly under pressure to step up its efforts to fight CSAM, having fallen behind its peers with the measures it has in place. And notwithstanding the issues it has aired around false positives and human reviews, it needs to do something.

The reality is that on-device screening will be an impossible sell. Apple can of course ignore the protests and hope they ultimately melt away, but the integrity of iPhone privacy will have been breached, and that’s a one-way street. The company needs to find words to sell cloud screening that are not tripped up by its prior messaging.

And this needs to happen fairly quickly. Apple certainly needs this resolved before next fall, because if it takes the view that this is a cornerstone of an OS update, and thus now falls to iOS 16, then it will not want its next launch damaged. If the alternative is an iOS 15 mid-life update, then this would be the most controversial such update in memory, and runs the risk of users just holding back.

Alternatively, if Apple opts for an iCloud update, one that doesn’t require a device update and which operates in the same way as its peers, then Apple needs to set our minds at rest as to the details behind that, and how it can reverse its earlier warnings. Right now, Apple isn’t saying anything more than was in its delaying statement.

This is a nightmare for Apple. Its CSAM backlash followed by backtrack came after WhatsApp’s over Facebook’s terms and then Google’s over FLoC: terrible optics for privacy-centric Apple. Whatever else happens, as soon as Apple comes back with its revised proposals, the Russian political retreat will loom large in any response. How Apple resolves this seemingly impossible problem is critical.
