I love how you've got such a hate-on for TD that you'll even defend blatant fraud so long as it's performed by 'your team'.
For all the cries about how terrible those blasted pirates are for 'stealing' from the artists, you jumped at the chance to defend those doing just that with a subtle-as-a-sledgehammer-to-the-face 'But look at something other than that!', mixed in with the usual baseless accusations and personal attacks.
Such delightful and telling standards you've got there.
Initially, AEPI was reluctant to hand over the relevant documents to allow the audit to take place, but here's what has just emerged:
A better question might be why they're not audited on a regular basis by people to whom saying 'No' is a good way to get slapped with a massive fine and/or fired on the spot.
If the justification for their existence is to collect money for artists and give money to artists, then any deviation from that undermines the justification for them to exist and should be harshly scrutinized and/or punished. As such, regular if random checks of the books should be the norm, and any hesitation on the part of the agency to open the books should result in the one stonewalling being fired and the agency investigated by an independent party.
Uh, no. That was the entire point of codifying the idea of safe harbors into law: that sites can take steps to moderate content without suddenly becoming liable for content posted by users. Were what you're saying true, Youtube would have been better off not implementing ContentID and ignoring DMCA claims, and I rather doubt that's what you meant to imply.
'Voluntarily' implementing a (lousy) filter for one type of content and complying with the law does not magically change their status such that they are responsible for what's posted by others using their service, whether that content be copyright related or otherwise.
$16 is completely ridiculous, no argument here (unless it's a compilation of several books, I suppose), but setting a limit of $1 strikes me as a bit too low. Given I can usually spend several hours reading a single book, $3-5 is my general price range when it comes to a purchase, as that seems about right for a few hours' worth of entertainment (any future re-reads are just a bonus).
It's possible some of them have seen a slight uptick in their cut, but I suspect (if anyone's got some evidence either way, feel free to chime in) that for most, their contracts had them getting a set amount per book sold. If that's true then even if all costs were removed their cut still wouldn't go up any, and they'd still be getting the same amount per book/ebook sold/'sold'.
Plenty fall into that category, though you generally have to look for those not tied down by traditional publishers, as barring notable exceptions like Baen, far too many of them seem to see digital format as an excuse to just pad out their profits by keeping the price the same (if not increasing it) while getting rid of all the costs associated with physical sales.
Even having to pre-screen 'problematic' videos would be a huge problem due to how many they'd have to deal with, and the massive numbers of false positives they'd be wading through.
ContentID, something that's based upon a 'Does it contain content X or doesn't it?' check, already has problems aplenty, flagging things for reasons ranging from absurd to downright broken. Now imagine a similar system but for 'offensive' content, and the nightmare that would be.
If Google wants to manually review videos flagged by users as 'offensive/extremist', which I believe they already do, then I've no problem with that. What I have a problem with is requiring them to do so ahead of time, as it would be insanely expensive, cause significant collateral damage, and make the service vastly less useful as a hosting platform(all of which would be bad enough for a huge company like them, but would be even worse for smaller services trying to break into the market and who wouldn't have the same resources that YT/Google does).
Hosting user submitted videos, not posting. The distinction is significant, as it means that thanks to 230/common sense protections, in the US at least, they aren't held responsible for what their users post and as such have no requirement to pre-screen for CYA reasons. With no requirement to pre-screen, more videos isn't a scaling problem getting out of control, because it was never a problem in the first place.
(Imagine if you will a donation-based library, where all the books are donated by others. Their 'business model' is to make sure that they have enough shelves to hold all the books and to make sure that people can find what they want. So long as they can manage those two, no matter how many books are donated or how fast, they're doing fine and their 'business model' is sound. Now imagine someone comes in and demands that they check every book for 'offensive' content before people can check it out. Now how much is donated is a problem, but that problem has nothing to do with their 'business model', and everything to do with the new requirement that's been dumped in their laps.)
The entertainment industry does make money from the content that they're filing DMCA claims for (assuming a valid notice anyway), and unlike the user submitted content that YT makes money from hosting, the DMCA contains an (effectively theoretical at this point) requirement to swear 'under penalty of perjury', which would require manual review.
As I noted above a DMCA claim is also easier to check, as the only subjective part involved is a consideration of fair use, which has a quick and easy 'checklist' attached, quite unlike the subjective 'is this offensive/extremist?' which, barring extreme cases(and sometimes not even then) can be much harder to decide on. Ask enough people and anything can be seen as 'offensive', so the question becomes 'how many people can we safely offend according to the requirements?'
There's also the difference in consequence, miss a 'guilty' copyright infringement case and the harm isn't likely to be very bad, whereas if a site is liable for user submitted content and they let an 'offensive/extremist' post through they're likely to be facing a serious penalty, which means they're much more likely to block even fringe stuff 'just in case', leading to large amounts of content and/or speech blocked.
In both cases a faulty claim means legitimate/legal content and/or speech being removed, and while services like YT don't have a requirement to screen content those sending out copyright claims (theoretically) do, so why is it you think that only the former group should be required to pre-screen?
As arguments go that's beyond ridiculous. Because anything created is under copyright whether you want it to be or not, you should avoid creating anything if you don't agree with the idiocy that is the law?
There's a difference between copyright law, and things that happen to be covered by it(that being everything since the colossally stupid change to 'copyright by default'). Creating something that happens to be covered by the law because the one creating it has no choice in the matter is hardly 'subjecting' people to copyright, and you're seriously stretching to try and claim that it is.
Only if what's being scaled up is part of the business model being used, and not something they're being slapped with after the fact.
Were Google/YT in the 'pre-screening video content' business then yes, they would be to blame if they set things up such that they couldn't handle the increased load of what they had to go through, but since they're not, it's not a 'business model problem' at all. Youtube hosts videos; that's its business model. Saying they should have to pre-screen everything first isn't a matter of scaling up something they've always had to do, it's adding something new on top of what they already do, something that the scale of the problem would make insanely expensive, and that would bring the service to a crawling halt if they were required to do it, contrary to your claims otherwise.
On a semi-related tangent, but your mention of how Google is big so it's not a problem has me again wondering, do you hold others to that same standard? Do you think that the movie and recording industries should likewise hire 'a room full of people' to personally vet every DMCA claim they send out to avoid false positives? They make billions too after all, surely it would be just as easy if not easier for them to pre-screen DMCA claims as it would be for Google/YT to pre-screen videos, so does that standard of yours apply to everyone, or just Google?
Funny thing about the example you went with, even beyond the fact that 'steal' is still not the right term to use unless the seat infringer takes the seat home with them when they leave: as you yourself note, the 'seat infringer' might pay, not for the seat but perhaps for food or something else. You've got their attention, and with that you still have a chance to get their money as well.
The person who 'does without' in that example though? The one who sees the price and decides that nah, that's not worth it, and as a virtuous upstanding citizen decides against the 'free version'? They aren't paying squat. They aren't paying for the seat, they aren't paying for the food, they're not paying for a shirt. They've moved on with their attention, meaning the chance of you getting any of their money has likewise gone.
The difference between someone engaged in copyright infringement and someone 'doing without' is that while neither are giving you money now, the latter is drastically less likely to give you any money in the future thanks to their attention having moved on to other things.
This is why I always find it funny when people respond to copyright infringement with 'If you don't like the terms/price do without!', as neither group is paying now, but the 'do without' group is even worse when it comes to possible future sales, making it a counter-productive argument.
What may be 'simple' for a few videos is anything but when you scale it up to that level, so you're not talking about 'a room full of people' but a massive system requiring various levels of review of enormous amounts of content.
There's also the problem of false positives, something that already plagues ContentID, a black or white 'Does X match Y?' system. Make the question a subjective one, 'Does X count as 'extremist' content?' and things would be even more insane.
Along those lines, the recording and movie industries also make billions in profits, which means they too can certainly afford to 'hire a room full of people' to review DMCA claims before sending them out to make sure that they don't flag something erroneously.
If they can't manage that, then perhaps their business models are broken.
I mean it's not like a star has ever been used for anything not related to communism, so of course any use of a star is commie related.
I gotta say, it's almost admirable how honest they are with their corruption. They flat out admit that the attempt to punish Heineken is related to the legal spat with a local brewery, like it's no big deal. If gross corruption were a trait to be proud of they'd have a lot to be proud of.
Given they would have been lying either way, I'm not sure how effective their damage control would have been had they been more proactive.
Sure they would have been able to get their version/lies out first, allowing them to better shape the narrative, but so long as the leaks still happened they still would have been dealing with evidence that contradicted what they were saying, as I don't imagine they would have been any more honest going first rather than reacting. The leaks were bad enough PR for them on their own, but their response to the leaks, doubling down and getting caught out on lie after lie after lie really did a number on their credibility, and I don't see that changing either way it went down.
The extracted data includes irrelevant personal information, prosecutors said, so they’re seeking an order from the court that would prohibit defense lawyers from copying or sharing information unless it’s relevant to defend their client.
Not mentioned: any steps taken to keep the police from looking through that 'irrelevant personal information', but hey, I'm sure they're being very careful to narrow the scope of their searches and immediately delete any data unrelated to the investigations that they stumble upon by accident during their narrow searches.
As for the bulk trials, if the judge gives that abomination of the legal system a pass they might as well resign on the spot and just let the prosecution hand out convictions as desired, because allowing something like that would indicate that they don't give a damn about even the most basic tenets of the legal system: 'Innocent until proven guilty in a court of law', where the prosecution has to demonstrate guilt of the accused, not just guilt of the group the accused might have been lumped into.