Global tech policy wonk, amicus, friend of the little guys. I started in tech before the internet was born, rode telecom through the right angle turn, and landed in communications and strategy. All ideas are my own or so deeply faked you'll never know the difference.
Surveillance has five eyes
While it's tempting to see this as an isolated UK issue, as long as any one of the Five Eyes has the ability to undermine encryption and security, they all do. Don't hold your breath waiting for the US or the Empire to do anything in opposition to this.
Who's the speaker then?
I can't get my head to go in this direction. If I build word processing apps, how does that make me the speaker of whatever someone using them says? Can a publisher of court opinions refuse to issue ones they don't agree with? What if I own & code a website publishing tool, which Smith then uses to create a site capturing the speech of a fascist I don't support ... can I block Smith?
The (Tasmanian) Devil is in the Details
I've been involved with the industry-created codes, and several back-and-forth exchanges with eSafety. Like most policy makers, in trying to please everyone they both champion E2EE and demand proactive CSAM identification and secret reporting. Obviously impossible.

My read of the Commissioner's response is that anything LESS than full E2EE fails the compulsory surveillance test. Oh, and if CSAM is prohibited in your policy, you need a way to detect and enforce that.

One potential outcome is that E2EE becomes effectively mandatory in AUS to avoid a surveillance obligation. Brilliant! Oh, and CSAM will be scrubbed from policy statements since it couldn't be enforced anyway. Bad!
I share your indignation, but ...
Hard to balance abandoning HK peeps against doing what one must to keep the internet marginally alive over there. Adding US liability to the mix probably doesn't make the decision easier, just more costly, since then you're breaking the law either way.
Bad regs are a global contagion
I've been watching these bad ideas proliferate across the western world. While there are nuances (common law versus civil law countries, constitutional variance, etc.), there is a pattern whereby each new jurisdiction points at those that have already taken (bad) steps and uses that as proof of a smart idea. But then, if Billy jumped off a bridge, does that mean you should? (quoting my mom).

Jurisdictions are exerting more and more regional regulation over the web. Yes, VPNs are the technical answer, but the source of the issue is political power, and politicians' fear that the internet is challenging theirs. Canada is ticked because tech won't comply (don't give me that attitude! ... my mom again).

Is anyone else convinced that a fragmented internet is only a few steps away? Countries are intent on forcing their own values on their local networks - they aren't going to give up just because technology doesn't work that way ... yet.

Bottom line: laws like this are proliferating, governments see tech as a threat to their absolute power, and they have the rule of law to enforce compliance without being "fair". It's only a matter of time before the firewalls go up.
...but the AI
I do worry a little that there was talk that AI-generated content (think Bing + ChatGPT) was clearly the platform's own speech. If a thousand monkeys typing randomly one day write something defamatory, the Court is going to asplode.