Surely it needs to be shown that the government can effectively govern the infrastructure before it begins redefining the infrastructure to include everyone's living room and garage?
This! And it doesn't seem likely, does it? Observation suggests that, far from working together for a common good, government today is about divisiveness, extreme partisanship, jumping at populist issues without reference to the reality behind them, and ideally one-upping your opponents. Facts and expert advice (unless the "expert" advice is backed with a ton of money) rarely seem to make the top 10, and if different sides agree on an issue, it's most often because it's such a hot-button topic they can't disagree, which tends to make the worst laws of all.
Just as with those early climate political wars, advocates for reform are facing the weaponization of uncertainty as a tactic to resist government intervention, with the abuse of data and science and metrics to advocate for an outcome of inaction.
And now this has become almost an art form. Disinformation, confusion, distortion and outright lying have not only become the norm in politics, they're by-and-large accepted as the norm. The latest US and UK elections are prime examples. Or, if you want something more relevant to society-altering issue-based politics, just take a look at the clusterf..k that was both sides of the Brexit campaign. My hopes for thoughtful internet regulation are not high.
We are all going to find out that the entire world is able to snoop on our internet browsing, and a revolution will quickly react with unbreakable encrypted packets shifting back and forth, with telltales to let you know if a packet has been opened by anyone other than the intended recipient.
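The "telltale" idea here is essentially authenticated encryption: a cryptographic tag travels with the ciphertext, and any modification in transit makes verification fail. A minimal sketch of the concept, using Python's standard-library HMAC as the tag (the keystream "cipher" here is illustrative only; a real system would use a vetted AEAD construction such as AES-GCM or ChaCha20-Poly1305):

```python
# Sketch of tamper-evident ("telltale") packets via encrypt-then-MAC.
# The stream cipher below is a toy for illustration, NOT a secure cipher.
import hmac, hashlib, secrets

def seal(key: bytes, plaintext: bytes) -> bytes:
    # Derive a keystream from key + random nonce, XOR it over the plaintext,
    # then append an HMAC tag over nonce + ciphertext.
    nonce = secrets.token_bytes(16)
    stream = hashlib.shake_256(key + nonce).digest(len(plaintext))
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key: bytes, packet: bytes) -> bytes:
    nonce, ct, tag = packet[:16], packet[16:-32], packet[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # The telltale: any bit flipped in transit invalidates the tag.
        raise ValueError("telltale tripped: packet was modified in transit")
    stream = hashlib.shake_256(key + nonce).digest(len(ct))
    return bytes(c ^ s for c, s in zip(ct, stream))

key = secrets.token_bytes(32)
packet = seal(key, b"meet at noon")
assert open_sealed(key, packet) == b"meet at noon"

tampered = packet[:-1] + bytes([packet[-1] ^ 1])  # flip one bit
try:
    open_sealed(key, tampered)
except ValueError as e:
    print(e)
```

Note that this detects tampering but cannot stop the snooping itself; confidentiality and integrity are separate properties, which is why real protocols bundle both into one AEAD primitive.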
Unfortunately, the former has been true for years, indeed decades, and few people care. Many people who do care are trying to go the other way and make it worse.
Suspension of respiration pending the latter outcome is considered ill-advised...
Does the proposed new rule solve the problem and achieve the desired outcome? Does it balance problem reduction with other concerns, such as costs? Does it result in a fair distribution of the costs and benefits across segments of society? Is it legitimate, credible, and trustworthy? But there should be an additional question: Does the regulation create any consequences for the Internet?
That sounds eminently sane, so why is it that observation suggests the questions lawmakers actually ask are more along the lines of:
Am I taking a wild swing at a hot-button issue of the moment?
Can I blame the tech sector for the problem and sound marginally plausible?
Will my wild-ass "solution" sound plausible to anyone as unfamiliar with the workings of the internet as I am?
More importantly, will the "solution" appeal to my chosen target demographic of voters?
Am I going to look good in the news grandstanding and railing about how those evil tech companies are being obstructive?
Kinda depends whether you're going for the political definition of bribe, which seems to limit the definition to "This specific pile of money for this specific service that is specifically prohibited...
Or the rather more dictionary definition:
dishonestly persuade (someone) to act in one's favour by a gift of money or other inducement.
... in which case, provision of favours, junkets, fundraisers, donations, etc., etc., which are done in the public eye and indeed on the public record, are just as much "bribes", just "legal" ones - mostly because the people who receive them write the laws.
"It's derivative and therefore protectable if it shares the faintest similarity of shape, form or colour, except where one could be easily mistaken for a direct evolution of the other when they are in fact completely different and not protected, except on a Tuesday where Mercury is in retrograde and the fourth aardvark from the left in the Beijing zoo is ill"
Call me cynical, but IP law - trademark or copyright - seems to generally come down on the side of the bigger chequebook rather than any kind of consistent logic.
A creative solution would mean the attorney would get paid for handing a legal problem back to the creators.
Ah, no... I think the point is it would mean the attorney, after handing said problem back for a creative solution, would then not get paid the dozens/hundreds of billable hours available for repeatedly haranguing and bullying the poor schmuck on the other end, starting with boiler-plate threats... Which would defeat the entire purpose of being an attorney, no?
Truly not. Algorithms can't understand context, so any automated moderation WILL have very high failure rates.
True enough, probably whether my idea worked or not, but that's kind of the point. Even if it's wrong, the worst you have done is flagged some competing opinions/facts/"facts" about something so the consequence of failure is significantly less.
And someone would have to write the algorithm and create a suitable index of links to use in response. What you're looking for is... basically an AI.
You could be right - I'll admit it's not my field of expertise - but it occurs to me that, for example, the existing Google algorithm is already pretty good at ascertaining the prevailing world opinion/scientific evidence on many subjects. The hard part may be giving it an idea of what counts as "legitimate sources" and, worse, deciding who does that.
Is it perfect? Nope! Is it game-able? Yep. Would it still require a ton of human oversight and intervention? Hell yes. Would it be as huge a cluster-f**k as the current overt and very wrong censorship? Probably not. Again, you wouldn't be removing anything, just pointing out contradictory evidence.
Unfortunately, doing nothing at all leads to the spread of misinformation.
I didn't suggest doing nothing at all; I suggested an automated pointing out of the contradictory and ideally scientific evidence to try to drown out the loud morons rather than making them louder by attempting to smother them.
When I see stuff like this, I can't help wondering whether the best route is not to remove anything at all, but instead to do something completely different.
For example, flagging anything that appears to go against the preponderance of available evidence, e.g. a big banner with something like: "This post appears to contain inaccurate information. Here are a number of links to reputable sources contradicting it."
That seems like something an algorithm could handle more accurately and even if not, the chilling effect of being wrong isn't so dire and the conspiracy theorists can't bitch so hard about censorship.
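The "flag, don't remove" mechanism proposed above can be sketched in a few lines. Everything here is a hypothetical stand-in: the claim index, the example phrases, and the example.org links are all invented for illustration, and a real system would need curated fact-checking sources plus human oversight for the hard cases:

```python
# Toy sketch of "flag, don't remove" moderation: the post is never altered
# or deleted; a banner with counter-evidence links is attached instead.
from typing import Optional

# Hypothetical index mapping claims that run against the preponderance of
# evidence to rebuttal links (all URLs are placeholders).
CLAIM_INDEX = {
    "vaccines cause autism": [
        "https://example.org/cdc-vaccine-safety",
        "https://example.org/lancet-retraction",
    ],
    "the earth is flat": ["https://example.org/eratosthenes"],
}

def banner_for(post: str) -> Optional[str]:
    """Return a warning banner with contradicting sources, or None."""
    hits = [links for claim, links in CLAIM_INDEX.items()
            if claim in post.lower()]
    if not hits:
        return None
    links = "\n".join(f"  - {url}" for group in hits for url in group)
    return ("This post appears to contain inaccurate information. "
            "Here are links to reputable sources contradicting it:\n" + links)

print(banner_for("I read that the earth is flat, honest!"))
print(banner_for("Nice weather today."))  # nothing dubious, no banner
```

The point of the design is exactly the one made above: a false positive here merely attaches some competing sources to a post, rather than silencing anyone, so the cost of the algorithm being wrong is far lower than with removal-based moderation.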
For example, the Telegraph article notes that full-genome sequencing of newborns means "parents could choose to be alerted to the fact their child faced heightened risks of specific diseases, and allow the NHS to offer more tailored treatment."
Maybe I'm just being cynical, but I assume the Telegraph's quote came directly from the health secretary's talking points. Given the government's efforts over the last decade-plus to turn the NHS into the same nightmare clusterf*ck as the US "healthcare" system, I read that as:
"Parents can easily have their child excluded or charged more for healthcare on the basis of 'pre-existing conditions'... Or at the very least, if we can't utterly destroy one of the most efficient healthcare systems in the world, it'll give us an excuse to drain some more money from it, not to mention the Home Secretary is drooling to get his hands on all that tracking data"
Fair use is a defense at trial, not anywhere near a right.
"Fair Use" as legally defined may be a defence at trial but, though you may not credit it, the vast majority of the human world does not use language according to the often skewed or constricted rules of the legal profession's use.
How about this, then:
"Use" of an owned object is a legal right that proceeded from an obvious natural right, which copyright law has consistently and counter-intuitively eroded.
You know, in addition to anyone in the government who had access to it… like criminals.
The modifier being dangled, I'll charitably (and unrealistically) assume you meant that criminals would have access to the data as well as the Government, rather than the criminals having access to the data being in the Government.
...whether or not this will actually "move the conversation forward. [snip] They have mostly only seemed interested in the [snip] approach to this, that assumes smart techies will give them their magic key without undermining everything else that keeps us secure.
I don't think this'll "move the conversation forward" either, because I also think the above assumption is wrong. It's politics now, and therefore basically an article of faith.
I think the problem is that those making the (essentially) political calls for "the end of encryption" or "backdoors in encryption", having no knowledge of how it actually works, simply assume that "the other side" (i.e. technical experts) are lying about the consequences to block them, because that's what they'd do in that position. The problem is that we have become fact-optional societies.
Methinks you are assigning too grand and Machiavellian a motive to basically power-hungry potato-heads. I would be surprised to find the motive is anything more thoughtful than shoring up political support.
I suspect the goal is to be seen to be "Doing Something" about an issue that's been blown up somewhere in the media because of one or more "Terrible Things That Have Happened".
Knee-jerk, reactionary law from morons who haven't a clue and don't care how the thing works, as long as they can be seen to be "Doing Something About The Problem" by the kind of voter that also doesn't know how it works but "Cares Deeply About The Children And Stuff".
Actual harm is irrelevant here because, by the time it manifests, some political rival will have had a hand in it and you can blame them; or, if not, just falsify the figures and hold a "Major Press Conference" touting how well it worked. The clueless are appeased and you get elected again - everyone wins.
British Politics... the triumph of appearance over fact (have you not seen Brexit?)