Horrifying: Google Flags Parents As Child Sex Abusers After They Sent Their Doctors Requested Photos

from the scanning-has-problems dept

Over the last few years, there has been a lot of attention paid to the issue of child sexual abuse material (CSAM) online. It is a huge and serious problem. And has been for a while. If you talk to trust and safety experts who work in the field, the stories they tell are horrifying and scary. Trying to stop the production of such material (i.e., literal child abuse) is a worthy and important goal. Trying to stop the flow of such material is similarly worthy.

The problem, though, is that as with so many things that have a content moderation component, the impossibility theorem rears its head. And nothing demonstrates that quite as starkly as this stunning new piece by Kashmir Hill in the New York Times, discussing how Google has been flagging people as potential criminals after they shared photos of their children in response to requests from medical professionals trying to deal with medical conditions the children have.

There is much worth commenting on in the piece, but before we get into the details, it’s important to give some broader political context. As you probably know if you read this site at all, across the political spectrum there has been tremendous pressure over the last few years to pass laws that “force” websites to “do something” about CSAM. Again, CSAM is a massive and serious problem, but, as we’ve discussed, the law (namely 18 USC 2258A) already requires websites to report any CSAM content they find, and they can face stiff penalties for failing to do so.

Indeed, it’s quite likely that much of the current concern about CSAM is due to there finally being some level of recognition of how widespread it is thanks to the required reporting by tech platforms under the law. That is, because most websites take this issue so seriously, and carefully follow the law, we now know how widespread and pervasive the problem is.

But, rather than trying to tackle the underlying problem, politicians often want to do the politician thing and just blame the tech companies for doing the required reporting. It’s very much shooting the messenger: the reporting by tech companies shines a light on the underlying societal failures that produced this problem, and that very light is then used as an excuse to blame the tech companies rather than the societal failings.

It’s easier to blame the tech companies — most of which have bent over backwards to work with law enforcement and to build technology to help respond to CSAM — than to come up with an actual plan for dealing with the underlying issues. And so almost all of the legal proposals we’ve seen are really about targeting tech companies… and, in the process, removing underlying rights. In the US, we’ve seen the EARN IT Act, which completely misdiagnoses the problem, and would actually make it that much harder for law enforcement to track down abusers. EARN IT attempts to blame tech companies for law enforcement’s unwillingness to go after CSAM producers and distributors.

Meanwhile, over in the EU, there’s an apparently serious proposal to effectively outlaw encryption and require client-side scanning of all content in an attempt to battle CSAM. Even as experts have pointed out how this makes everyone less safe, and there has been pushback on the proposal, politicians are still supporting it by basically just repeating “we must protect the children” without seriously responding to the many ways in which these bills will make children less safe.

Separately, it’s important to understand some of the technology behind hunting down and reporting CSAM. The most famous tool is PhotoDNA, initially developed by Microsoft and used by many of the big platforms to share hashes of known CSAM, to make sure that material that has already been discovered isn’t spread more widely. There are some other similar tools, but for fairly obvious reasons these tools carry some risks, and there are concerns both about false positives and about who is allowed to have access to them (even though they share hashes rather than actual images, the possibility of such tools being abused is a real concern). A few companies, including Google, have developed more AI-based tools to try to identify CSAM, and Apple (somewhat infamously) has been working on its own client-side scanning tools along with cloud-based scanning. But client-side scanning has significant limits, and there is real fear that it will be abused.
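
To make the hash-matching idea concrete, here is a minimal sketch of the general approach: compute a fingerprint of an uploaded image and compare it against a list of fingerprints of previously identified material. PhotoDNA itself is proprietary, so this sketch stands in for it with the open-source imagehash library’s perceptual hash; the file names and distance threshold are made-up placeholders, and none of this reflects the AI-based classifier that flagged Mark’s photos.

    # Minimal sketch of hash-list matching, in the spirit of PhotoDNA-style
    # systems. Assumes the third-party Pillow and imagehash packages; a
    # generic perceptual hash stands in for the proprietary PhotoDNA hash.
    from PIL import Image
    import imagehash

    # Hypothetical database of hashes of previously identified images.
    known_hashes = {
        imagehash.phash(Image.open(path))
        for path in ["known_image_1.png", "known_image_2.png"]
    }

    def matches_known_image(path, max_distance=5):
        """True if the image's perceptual hash is within max_distance bits
        (Hamming distance) of any hash in the known set."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= max_distance for known in known_hashes)

    print(matches_known_image("uploaded_photo.jpg"))

Matching against a list of known hashes, of course, only catches material that has already been identified; it says nothing about brand-new images, which is exactly the gap the AI classifiers mentioned above are meant to fill — and where the false-positive risk comes in.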

Of course, spy agencies also love the idea of everyone being forced to do client-side scanning in response to CSAM, because they know that basically creates a backdoor to spy on everyone’s devices.

Whenever people talk about this and highlight the potential for false positives, they’re often brushed off by supporters of these scanning tools, saying that the risk is minimal. And, until now, there weren’t many good examples of false positives beyond things like Facebook pulling down iconic photographs, claiming they were CSAM.

However, this article (yes, finally we’re talking about the article) by Hill gives us some very real world examples of how aggressive scanning for CSAM can not just go wrong, but can potentially destroy lives as well. In horrifying ways.

It describes how a father noticed his son’s penis was swollen and apparently painful to the child. An advice nurse at their healthcare provider suggested they take photos to send to the doctor, so the doctor could review them in advance of a telehealth appointment. The father took the photos and texted them to his wife so she could share with the doctor… and that set off a huge mess.

Texting them — taking “affirmative action,” in Google’s terms — caused Google to scan the material, and its AI-based detector flagged the image as potential CSAM. You can understand why. But the context was certainly missing. And it didn’t much matter to Google — which shut down the guy’s entire Google account (including his Google Fi phone service) and reported him to local law enforcement.

The guy, just named “Mark” in the story, appealed, but Google refused to reinstate his account. Much later, Mark found out about the police investigation this way:

In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back.

“You have to talk to Google,” Mr. Hillard said, according to Mark. “There’s nothing I can do.”

In the article, Hill highlights at least one other example of nearly the same thing happening, and also talks to (former podcast guest) Jon Callas about how it’s likely that this happens way more than we realize, but the victims probably aren’t willing to speak about it, because then their names are associated with CSAM.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases “canaries in this particular coal mine.”

“There could be tens, hundreds, thousands more of these,” he said.

Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.

There’s so much in this story that is both horrifying and a very useful illustration of the trade-offs and risks of these tools, and of the process for correcting errors. It’s good that these companies are making proactive efforts to stop the creation and sharing of CSAM. The article already shows how these companies go above and beyond what the law actually requires (contrary to the claims of politicians and some in the media — and, unfortunately, many working for public interest groups trying to protect children).

However, it also shows the very real risks of false positives, how they can create very serious problems for people, and how few people are even willing to discuss the issue publicly, for fear of the impact on their own lives and reputations.

If politicians (pushed by many in the media) continue to advocate for regulations mandating even more aggressive behavior from these companies, including increasing liability for missing any content, it is inevitable that we will have many more such false positives — and the impact will be that much bigger.

There are real trade-offs here, and any serious discussion of how to deal with them should recognize that. Unfortunately, most of the discussions are entirely one-sided, and refuse to even acknowledge the issue of false positives and the concerns about how such aggressive scanning can impact people’s privacy.

And, of course, since the media (with the exception of this article!) and political narrative are entirely focused on “but think of the children!” the companies are bending even further backwards to appease them. Indeed, Google’s response to the story of Mark seems ridiculous as you read the article. Even after the police clear him of any wrongdoing, it refuses to give him back his account.

But that response is totally rational when you look at the typical media coverage of these stories. There have been so many stories — often misleading ones — accusing Google, Facebook and other big tech companies of not doing enough to fight CSAM. So any mistakes in that direction are used to completely trash the companies, saying that they’re “turning a blind eye” to abuse or even “deliberately profiting” off of CSAM. In such a media environment, companies like Google aren’t even going to risk missing something, and their default is going to be to shut down the guy’s account. Because the people at the company know they’d get destroyed publicly if it turns out he was involved in CSAM.

As with all of this stuff, there are no easy answers here. Stopping CSAM is an important and noble goal, but we need to figure out the best way to actually do that, and deputizing private corporations to magically find and stop it, with serious risk of liability for mistakes (in one direction), seems to have pretty significant costs as well. And, on top of that, it distracts from trying to solve the underlying issues, including why law enforcement isn’t actually doing enough to stop the actual production and distribution of actual CSAM.



Comments on “Horrifying: Google Flags Parents As Child Sex Abusers After They Sent Their Doctors Requested Photos”


This comment has been flagged by the community.

Naughty Autie says:

Re:

The thing is, Mark (the guy at the centre of this case) absolutely can sue Alphabet and Section 230 isn’t in it. By cutting off Mark’s access to his devices over a single small series of images of his son’s penis for medical purposes and leaving the police no way to tell him that he had no case to answer so he could pass that information on at the relevant time, Alphabet went beyond moderation and into breach of contract.

Anonymous Coward says:

Re: Re: Re:

From Google Fi’s ToS:

We may review Data Content to determine whether it is illegal or violates our policies, and we may remove or refuse to display Data Content that we reasonably believe violates our policies or the law.

Since the ToS say nothing about revoking access to accounts in the event that a Google Fi user sends or receives data content Alphabet deems illegal, there does seem to be a breach of contract on the company’s part, just as Autie suggested.

Bergman (profile) says:

Re: Re: Re:2

There is no requirement in contract law that the contract specify that they obey the laws they are required to. In fact, any contract term that violates the law is unenforceable, null and void.

Their user agreement says they can revoke access for any reason or none at all, which certainly covers revoking service to those accused of trafficking in CSAM.

Anonymous Coward says:

Re: Re: Re:2

I’m asking what part of which contract they’re accused of breaching. My understanding is that, in the USA, landline phone companies could not arbitrarily refuse to service someone—being basically necessities, they were made a regulated common-carrier service, whereas cellphones are a new and optional fad for rich doctors and stock-traders and don’t need to be regulated.

I guess the Google Fi terms of service would be the relevant contract (the text is invisible but can be revealed by disabling stylesheets). It says: “You must have an active Google Account to use the Services […] Please be aware that any suspension or termination of your Google Account may cause your Google Fi account to also be suspended or terminated.”

Further, it also says Google (and affiliates etc.) are not responsible for, basically, anything. Including “Providing or failing to provide the Services”. It seems like a technicality to call that a contract.

Anonymous Coward says:

Re: Re: Re:3

…cellphones are a new and optional fad for rich doctors and stock-traders and don’t need to be regulated.

Tell that to any homeless person in the UK and they’ll laugh in your face. Many have cell phones to gain access to essential services, some fairly expensive ones from before they became homeless, and many purchased second-hand.

Naughty Autie says:

Re: Re: Re:3

From the article:

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked.

Clearly, Google Fi was providing Mark’s phone line as well as his internet connection, making it a common carrier according to the FCC definition, and it was therefore illegal for the company to cut off his account not only without warning, but also before he had been charged with any crime.

ECA (profile) says:

Re: Wow,

The service provider?
Google? They aren't a service provider for 99% of the nation. And I don't think they are in California yet.
As a cellphone service, they DID what the law told them to DO. So who is liable?

Point a finger with no reasoning, and you have 4 fingers pointing at YOU.

The hardest thing to change is Mankind. It's easy to blame everyone and everything. 'THAT made me mad, so I HIT MY KID, and BEAT MY WIFE' really isn't good reasoning.
This is like the current Marijuana vs. Alcohol debates.

They have broken up the ideals from the past, of KNOWING your neighbors. Sharing and caring FOR each other. Everything MUST COST money?
They have taken the old corporate ways of controlling our lives, from work to play to the grocery store. And EMBEDDED them into our lives. WE MUST have MONEY, as we SPEND money, to MAKE more money, 'cause we can't LIVE without MONEY.

The only time it gets confusing is when nothing seems to be done, or there is TOO MUCH that seems to be happening.

So let's fall back to what you said. Let's fall to Google and the cellphone company, who were following the law, then take that LAW back, and lose in court for a couple million. GUESS WHAT, I AIN'T PAYING IT, YOU ARE.

Anonymous Coward says:

Re: Re:

Let's fall to Google and the cellphone company, who were following the law, then take that LAW back…

Google was required to freeze “Mark” out of his accounts pending investigation. It was NOT required to reinstate his accounts when he was cleared, so they didn’t.

So no. The issue isn’t that Google was or wasn’t following the law, since once he was cleared, the law no longer applied. The issue (or rather, THAT issue, as there are others) is about what the company did not do.

And about that, as TOG said,

… Google almost certainly has a ‘we retain the right to revoke or suspend your account at any time at our discretion and whether we walk that back is entirely up to us’ clause in their TOS.

And given the publicity of this case, it may well happen that Mark might get his account back, unlike the folks who did not get their story written up and viralized.

Stephen T. Stone (profile) says:

Re:

This is why it’s important for users to be able to sue service providers for relief.

When service providers do a thing that runs afoul of the law and harms people in tangible, provable ways? Yes, people should be able to sue those service providers. That said…

Not all decisions to cut off an account are legitimate moderation.

…general content/account moderation isn’t one of those things.

Anonymous Coward says:

Re: Re:

Can you admit that cutting off phone and Internet access went beyond “general content/account moderation”? By providing those, Google Fi was acting as a common carrier per the FCC definition, so shouldn’t have revoked access unless it was proved the guy in this case had broken the law. Even if he had, they weren’t required to cut off his phone line with zero notification.

Anonymous Coward says:

While this sounds like a lack of knowledge (which people can’t be blamed for), this is a good time to point out a very very important thing:

NEVER EVER send medical information, especially other people's medical information, electronically (there are exceptions, but they involve knowing how to do so securely… which isn't easy). Not only is it of questionable legality (While I haven't brushed up on US CSAM law lately, that photo may have been illegal to send to his wife, regardless of it being for medical purposes), it's also pretty rude. Securing digital communication is, currently, extremely difficult.

Thus sharing someone's medical information should be done with utmost care. I think our society has massive failings, both in the ease of securing digital communications, and in educating people about the security, or lack thereof, in most channels.

Anonymous Coward says:

Re:

NEVER EVER send medical information, especially other people's medical information, electronically

Instead have them visit the medical office during a pandemic, and catch a potentially fatal disease.

As the article points out, context matters, and that extends to using online services for medical consultations.

Anonymous Coward says:

Re: Re:

As the article points out, context matters, and that extends to using online services for medical consultations.

“Texting them” is not even close to an acceptable way to send ostensibly-private data, particularly pictures of undressed people. Perhaps a web-based upload, with TLS, directly to a medical portal could work. But few people know enough about OPSEC to manage even that. (Like, are they gonna remember to wipe or physically destroy every storage medium the photo may have resided on?)

It’s a good example of why we need ubiquitous network security. What if this guy had done everything right, and then the doctor had dropped the pictures into a plaintext email to a specialist? And even if everything was properly encrypted, metadata like “X contacted a pediatrician, got a reply, and contacted a urologist” could be compromising.

Naughty Autie says:

Re: Re: Re:

“Texting them” is not even close to an acceptable way to send ostensibly-private data, particularly pictures of undressed people.

You didn’t read the entire article, did you? Mark didn’t text the images to the doctor’s office, he texted them to his wife. And he might still have had his Google Fi account unreasonably cut off if he had sent them to the doctor’s office himself, using his Gmail account to do so.

Anonymous Coward says:

Re: Re: Re:2

You didn’t read the entire article, did you? Mark didn’t text the images to the doctor’s office, he texted them to his wife. And he might still have had his Google Fi account unreasonably cut off if he had sent them to the doctor’s office himself, using his Gmail account to do so.

I could ask if you actually read the AC’s comment, but I’m actually going to assume you did, and simply didn’t understand what they were suggesting.

So I will paraphrase: their suggestion is that Mark directly open up the doctor's office website, which would have a TLS-secured (meaning encrypted) web portal for uploading sensitive data. That way the only parties that have the sensitive data are Mark and the doctor's office. Google would not have had any data with which they could have taken action.
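
To illustrate the suggestion, here is a rough sketch of what such a direct upload over TLS might look like; the portal URL, endpoint, and form fields are entirely hypothetical, and a real patient portal would also require the parent to log in first.

    # Hypothetical sketch of uploading a photo directly to a clinic's HTTPS
    # portal instead of texting it. Uses the third-party requests library;
    # the URL and form fields below are invented for illustration only.
    import requests

    PORTAL_UPLOAD_URL = "https://portal.example-clinic.test/api/photo-upload"

    with open("photo.jpg", "rb") as photo:
        response = requests.post(
            PORTAL_UPLOAD_URL,
            files={"photo": ("photo.jpg", photo, "image/jpeg")},
            data={"patient_id": "12345", "note": "pre-appointment photo"},
            timeout=30,
            # verify=True is the default, so the server's TLS certificate is
            # checked and the image is encrypted in transit to the clinic only.
        )
    response.raise_for_status()

The point of the sketch is simply that the photo travels encrypted from the parent's device to the clinic's server, with no intermediary (carrier, email provider, or cloud scanner) holding a readable copy.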

Anonymous Coward says:

Re: Re: Re:4

A reasonable interpretation of that AC's message is what should have happened. If the doctor's office doesn't have that setup… then they don't have a reasonable protocol in place (using 15+ year old technology) to deal with a pandemic that's been going on for nearly 3 years now.

I would say it can't be an assumption when you are describing what should have happened.

To rephrase my original statement (the original top-level comment): SMS has never been a secure protocol. This has been well known for years and years (though again, society has done a very poor job of disseminating this information, a fact of which Mark is likely a victim).

Furthermore, a quote from the article:

The father took the photos and texted them to his wife so she could share with the doctor… and that set off a huge mess.

Yes, according to the article itself… the wife was indeed directly dealing with the doctor's office. However, good security hygiene (which again society failed to provide him with) suggests that only Mark and the doctor's office should have the sensitive info. If the doctor's office was totally unprepared to receive information from the other parent… that would also be a terrible failing on their part.

Anyhow, I hope this clears up any remaining confusion.

Anonymous Coward says:

Re: Re: Re:5

If the doctor's office doesn't have that setup… then they don't have a reasonable protocol in place (using 15+ year old technology) to deal with a pandemic that's been going on for nearly 3 years now.

The pandemic hadn’t been going on for nearly three years when this case began back in February of last year, and the precautions taken against it, including online consults, haven’t been going on for nearly three years now. Nice attempt at a strawman, though.

However, good security hygiene (which again society failed to provide him with) suggests that only Mark and the doctor's office should have the sensitive info. If the doctor's office was totally unprepared to receive information from the other parent… that would also be a terrible failing on their part.

Yes, let’s pick on Mark and his family physician for having their feet knocked out from under them by a pandemic. What better way to take the heat off Alphabet for breaching a contract and stealing user data?

Anyhow, I hope this clears up any remaining confusion.

Anonymous Coward says:

Re: Re: Re:6

However, good security hygiene (which again society failed to provide him with) suggests that only Mark and the doctor's office should have the sensitive info. If the doctor's office was totally unprepared to receive information from the other parent… that would also be a terrible failing on their part.

Yes, let’s pick on Mark and his family physician for having their feet knocked out from under them by a pandemic. What better way to take the heat off Alphabet for breaching a contract and stealing user data?

Hmmm. I was unaware that some people considered pointing out good practice when someone violates it (out of ignorance) to be “picking on them”.

Well, I don’t hold that view at all. After all, by pointing it out other people (and Mark, if he reads Techdirt) can make more informed (and hopefully better) decisions in the future.

As for Alphabet… why are you even bringing it up? My original comment clearly indicated that I saw this as a chance for people to learn things society should have informed them of, but has systematically failed to do so.

However since you brought it up: I would also recommend people find a way to avoid Google products (I’m not even sure that Alphabet was directly involved in this case, however in my humble opinion that detail is immaterial).

As for “breach of contract”: as I avoid Google products, and didn’t see anything in the article regarding contracts, I have nothing to say about it either way. If you want to argue about that… sure, go ahead. Write up an article (or find one) that analyzes the contract and breach. Then post a link. But I suggest you make that a different top-level comment, as it’s unrelated (and, to be honest, outside the scope of my interests).

Anonymous Coward says:

Re: Re: Re:7

I was unaware that some people considered pointing out good practice when someone violates it (out of ignorance) to be “picking on them”.

Davec, is that you? After all, only a cop apologist would congratulate law enforcement efforts for going overboard and blame the victim for breaking a law they didn’t break at all.

Anonymous Coward says:

Re: Re: Re:8

Davec, is that you?

No, I’m not. But your question kinda makes it seem like you’ve decided I am. So arguing that seems kinda pointless.

After all, only a cop apologist would congratulate law enforcement efforts for going overboard and blame the victim for breaking a law they didn’t break at all.

Hmmm. I fail to see how my top-level comment presents as a “cop apologist”. Nothing in there endorsed any law enforcement activity. The closest I came was speculating (in an aside) about the legality of the text messages that were sent. However, even if that aside turns out to be totally incorrect (as another commenter kindly pointed out), it doesn’t change the fact that sending sensitive information through digital means is prone to getting leaked. The whole point of the post was that people should take a lot more care with sensitive information (especially when it’s information about another person, in this case a child).

Anonymous Coward says:

Re: Re: Re:8

Would it have helped if I had said “Alphabet (or Google)”?

I had assumed that most people were aware Alphabet was Google’s parent company. A paraphrase of that specific question is “Why are you bringing up Alphabet (or Google)?”

Of course, I do later draw a distinction (since, as far as I know, Alphabet does not directly handle user data… but I do not think that really makes a huge difference to the subject at hand).

Naughty Autie says:

Re: Re: Re:11

Actually, AC is right. Parent companies tend to take all the rights and responsibilities of their subsidiaries on themselves, as I found out when I contacted Obsidian Entertainment for permission to use what turned out to be Bethsoft owned content in my fanfics. So I guess the mouseover text in this XKCD comic is about your comment, not AC’s.

Anonymous Coward says:

Re: Re: Re:12

Well, it is definitely interesting to say that “He will base his choice of who to sue on which one has the lawyers” is a good argument.

In my humble opinion the following would have been “good” arguments (assuming they were true):
* He will sue Alphabet because they are a party to the contract
* He will sue Alphabet because they handled the data.
* He won't sue Alphabet, but they will intervene in the lawsuit.
* Google Fi is merely a division of Alphabet, not a separate legal entity.

(Again if those were true. I’m pretty sure most aren’t. I haven’t checked all of them… because I don’t care, and am not actually trying to argue who he will sue).

Anyhow the rhetoric here, in this thread, has definitely been enlightening.

Anonymous Coward says:

Re: Re: Re:4

That assumes two things, that the doctors office had a web portal set up to deal with medical details

The comment being referenced did not assume that. The point was that it’s inappropriate for a doctor to ask a client to send sensitive data insecurely. If that means they have to walk over with a USB drive, or FedEx an encrypted one and phone over the password, so be it.

Anonymous Coward says:

Re: Re: Re:6

I don’t see how that’s being ignored. The sequence of events is understandable (kind of—a COVID-like pandemic was realistically predicted since SARS version 1, which means there were 15 years to set things up). That doesn’t mean that suddenly “anything goes” in terms of information security. My employer made me set up a VPN, which involved them rush-shipping me a security key. I couldn’t access a thing till that came. Luckily, couriers kept doing pickups and dropoffs the whole time.

Anonymous Coward says:

Re: Re: Re:2

Perhaps his wife was dealing with the doctor, from her work, and her husband was at home with the child. That or a similar reason could be why the photos were being relayed via the wife. Indeed, there are other circumstances where a husband may send pictures to his wife when looking after children and needing advice.

Anonymous Coward says:

Re: Re: Re:2

You didn’t read the entire article, did you? Mark didn’t text the images to the doctor’s office, he texted them to his wife.

So what? Has everyone already forgotten about when all those photos of nude celebrities leaked? If you’re aware of that, and don’t understand how it happened, it should indicate that you don’t have the OPSEC skills to deal with this type of data. It’s okay; I don’t either. It’s not in any way weird to tell a doctor you don’t have enough confidence in your technical ability to do what they’re asking.

SMS is not a direct communication between sender and receiver. The phone companies can read everything, and few people know who else is involved (look up Syniverse) or what data anyone keeps or for how long.

And he might still have had his Google Fi account unreasonably cut off if he had sent them to the doctor’s office himself, using his Gmail account to do so.

Gmail is an email provider. Email is insecure. Everybody should know email is insecure, because people have been saying it for literally decades—long before most people had email. Automated emails from insurance companies and banks remind us to never send sensitive information in that way.

As noted, even encrypted email is risky. Expect this to come up in the context of current abortion crackdowns. A young woman sends an email to an abortion clinic, and the cops can only see the From and To addresses and some unreadable encrypted data. Guess what they get if they see her walk onto a bus to that clinic’s city: probable cause (well, some judge will think so).

This situation is, of course, unacceptable. We should have secure communication. But we shouldn’t just pretend that we do.

Anonymous Coward says:

Re: Re: Re:3

As noted, even encrypted email is risky.

Okay, noted.

A young woman sends an email to an abortion clinic, and the cops can only see the From and To addresses and some unreadable encrypted data.

But didn’t you just say that email is insecure? Make up your mind, will you?

Guess what they get if they see her walk onto a bus to that clinic’s city: probable cause (well, some judge will think so).

Surely an argument for more online consults rather than less.

Anonymous Coward says:

Re: Re: Re:4

But didn’t you just say that email is insecure? Make up your mind, will you?

Crypto-nerds can sometimes figure out PGP. Good luck finding a doctor that can. Anyway, it doesn’t prevent traffic analysis, which is a significant insecurity in this example.

Surely an argument for more online consults rather than less.

You know, there’s a limit to what abortion doctors can do online. Some people want more than talk.

Anonymous Coward says:

Re: Re: Re:

Sounds like you should tell yourself that more often. As for CSAM law, it’s obvious you’ve never read the statutes in your life since federal law defines it as any visual depiction of sexually explicit conduct involving a minor, not a small series of images of a boy’s genital area for medical purposes. That’s why the guy in this case was let off.

Anonymous Coward says:

Re:

While this sounds like a lack of knowledge (which people can’t be blamed for), this is a good time to point out a very very important thing:

Never put all of your authentication eggs in one provider basket.

The person in this instance lost their phone, their email, and their ability to access numerous services predicated on either or both of those.

The same thing could have happened if the company (well, say a less megalithic company) had suddenly folded or lost connection to the wider internet.

And… from the article:

Mark still has hope that he can get his information back. The San Francisco police have the contents of his Google account preserved on a thumb drive. Mark is now trying to get a copy. A police spokesman said the department is eager to help him.

nasch (profile) says:

Re:

Not only is it of questionable legality (While I haven’t brushed up on US CSAM law lately, that photo may have been illegal to send to his wife, regardless of it being for medical purposes)

Nope, not illegal. In California, the statute reads “which involves the use of a person under 18 years of age, knowing that the matter depicts a person under 18 years of age personally engaging in or simulating sexual conduct”. Every such statute I’ve seen has similar language. The material must depict a sexual situation in order to be considered child pornography.

This comment has been flagged by the community.

OGquaker says:

The Right to get removed

This has been going on for decades in politics.
As a “leftist” Church in South Central Los Angeles, I have many stories of friends, protests, public events, and perps-as-victims removed from any public media, suddenly dead computers, and/or persons reduced to homelessness within the last decades. I guess that's better than being targeted under a 70,000 pound B-52 bomb load. Care is just a four letter word.

This comment has been flagged by the community.

Anonymous Coward says:

Re:

At what point did Google decide that monitoring all communications for evidence of crimes was a good idea? Start with CSAM, and next it will be terrorism, and then any evidence of criminal activity.

A question for you to ponder: how else was the husband going to get the picture to his wife so that she could continue to deal with the medical office?

Christenson says:

ON CSAM

We find ourselves in a difficult position: The police force, in large proportion, is composed of domestic abusers, if not necessarily abusers of the policed. Some fraction of those will produce and share horrific CSAM. How to stop the CSAM without violating everyone’s rights?

I think, in the end, the trick is going to be to make on-line sex and sexuality generally safe and available with the understanding that if it’s made with REAL kids it’s not OK, and even those that like that sort of thing help enforce it.

We are getting to a point where deepfakes make basically any imagery reasonably producible without the actual people, even currently illegal CSAM. It would be a shame not to use that capability for good.

Hope Mark’s kid got whatever it was straightened out, it sounds painful.

Anonymous Coward says:

Re:

I think, in the end, the trick is going to be to make on-line sex and sexuality generally safe and available with the understanding that if it’s made with REAL kids it’s not OK, and even those that like that sort of thing help enforce it.

That sounds like a good idea until you realize that some will produce real child porn and claim it’s a deep fake. That’s why realistic images are banned too.

Christenson says:

Re: Re: Impossibility

As noted by others above, complete elimination of child exploitation (or my other favorite evil, domestic violence) is not a realistic possibility; it can only be minimized.

At some point, it gets to just be easier to operate the tech and do what’s legal than to deal with the human beings and the illegality.

There’s also a problem with slippery slopes — suppose someone draws some cartoons?

Finally, I’m really curious as to what the science has to say about pornography, domestic abuse, and what changes actual outcomes.

Anonymous Coward says:

Re: Re: Re:

There’s also a problem with slippery slopes — suppose someone draws some cartoons?

You’re imagining the slippery slope here. The specific statutes banning child pornography clearly define it as photographs and realistic computer generated images of minors engaged in sexual acts, and cartoons obviously don’t meet that definition. Nice slippery slope fallacy, though.

nasch (profile) says:

Re: Re: Re:2

The specific statutes banning child pornography clearly define it as photographs and realistic computer generated images of minors engaged in sexual acts, and cartoons obviously don’t meet that definition.

Federal law already, today, prohibits:

“knowingly producing, distributing, receiving, or possessing with intent to distribute a visual depiction of any kind, including a drawing, cartoon, sculpture, or painting, that, under specified circumstances, depicts a minor engaging in sexually explicit conduct and is obscene, or depicts an image that is or appears to be of a minor engaging in such conduct and such depiction lacks serious literary, artistic, political, or scientific value.”

https://www.congress.gov/bill/108th-congress/senate-bill/151

Anonymous Coward says:

Re:

We are getting to a point where deepfakes make basically any imagery reasonably producible without the actual people, even currently illegal CSAM.

Current “commercial” application of AI in this field is… glaringly obvious and in no way a good indicator of realism.

Anyone who can make that level of deepfake is likely to have a reasonable amount (read: has enough money to hire a small team of photomanipulation experts, video editors and sound technicians) of resources on their side, and is likely to be rich enough to ruin someone’s life.

That One Guy (profile) says:

Horrifying but not surprising

Unfortunately, as the article notes, the penalties and incentives are entirely one-sided on this issue: a false positive may screw over an individual or their family, but a false negative stands to screw over the company involved, so any sane company will likely always err on the side of caution, and if that means cutting off an account and keeping it cut off, that’s likely what they’ll go with.

With politicians and pundits already lambasting tech companies for ‘not doing enough about CSAM,’ not going overboard is just begging to be dragged over the coals, and that’s before any potential false negatives. So as it stands this is horrifying for the victims, but it’s also entirely within expectations for the company.

Rekrul says:

While sending such pictures is a bad idea (even though it should be a perfectly OK thing to do in a medical context), if someone were going to do this, I’d make two suggestions. Unfortunately, I’m sure both are completely beyond the capabilities of people today.

  1. Put the images into a password-protected Rar archive and send that (see the sketch after this list). Sadly most people don’t even know what Rar is, let alone how to use it, and I’m sure that the doctor’s office would also have no idea what to do with such a file.
  2. Slap a large text explanation on the photos that anyone reviewing the photos can’t possibly miss or misinterpret. Like “MEDICAL PHOTO – This photograph was taken for the express purpose of allowing our doctor to evaluate my child’s condition prior to his appointment.” Of course that would require them to know how to actually use a drawing program to add text to a photo. Maybe there’s an app for that? And of course, it would probably still get flagged, but maybe it would keep Google’s supposed ‘human’ reviewers from agreeing that it’s CSAM.
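
For what it’s worth, here is a rough sketch of how both suggestions could be scripted. It assumes the third-party Pillow and pyzipper packages, substitutes an AES-encrypted ZIP for the Rar archive (the idea is the same), and every file name and the password are placeholders.

    # Rough sketch of the two suggestions above: (2) stamp a visible label on
    # the photo, then (1) put it in a password-protected archive. Assumes the
    # third-party Pillow and pyzipper packages; an AES-encrypted ZIP stands in
    # for the suggested Rar archive.
    import pyzipper
    from PIL import Image, ImageDraw

    LABEL = ("MEDICAL PHOTO - taken for the express purpose of allowing our "
             "doctor to evaluate my child's condition before his appointment.")

    # Suggestion 2: add an unmissable caption to a copy of the photo.
    img = Image.open("photo.jpg")
    ImageDraw.Draw(img).text((10, 10), LABEL, fill="red")  # default font; real use would pick a larger one
    img.save("photo_labeled.jpg")

    # Suggestion 1: put the labeled photo into a password-protected archive.
    with pyzipper.AESZipFile("for_doctor.zip", "w",
                             compression=pyzipper.ZIP_DEFLATED,
                             encryption=pyzipper.WZ_AES) as archive:
        archive.setpassword(b"share-this-password-by-phone-not-text")  # placeholder
        archive.write("photo_labeled.jpg")

Of course, the doctor’s office would still need to know the password and how to open the archive, which rather proves Rekrul’s point.
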
Anonymous Coward says:

Re:

And both suggestions are added time and effort when, as appears to be the case here, the medical office asks for images when booking an appointment. Also, who is thinking about possible misinterpretations when trying to care for one of their children?

The real problem is a witch hunt for CSAM and its perpetrators making innocent actions look like guilt, and asking unqualified people, and even worse algorithms, to identify it. Witch hunts are hard on any innocent that gets caught up in them.

Rekrul says:

Re: Re:

The real problem is a witch hunt for CSAM and its perpetrators making innocent actions look like guilt, and asking unqualified people, and even worse algorithms, to identify it. Witch hunts are hard on any innocent that gets caught up in them.

Agreed. It’s gotten to the point where virtually all men are suspected of being closet pedophiles.

I also take issue with the criminalization of drawings or computer rendered images. If no actual child was used in the production of such images, they shouldn’t be illegal.

Rekrul says:

I have to say that I’m surprised a doctor’s office actually accepted email. Any time I’ve had a rash or strange marks (arm, foot, etc) and wanted to email pictures, I’m told that they can’t accept email. Most of them can’t even access the net other than to run Zoom (I don’t have a web cam), so I can’t even upload the photos and tell them the URL.
