YouTube Filters At It Again: Pokemon YouTubers Have Accounts Nuked Over Child Porn Discussions That Weren't Occurring

from the filter-fail dept

It's clear at this point that YouTube's automated filtering and flagging is simply not good. Whatever legitimacy the platform might want to claim by touting its successes must certainly be diminished by the repeated cases of YouTube flagging non-infringing videos as infringing, and by the fact that the whole setup has been successfully repurposed by blackmailers who hold accounts hostage through strikes.

While most of these failures center on ContentID's inability to discern actual intellectual property infringement, and on its avenues for abuse, YouTube's algorithms can't even suss out graver matters, such as child exploitation. This became apparent recently when multiple Pokemon streamers had their accounts nuked over discussions about child pornography that never occurred.

A trio of popular Pokemon YouTubers were among the accounts wrongly banned by Google over the weekend for being involved in “activity that sexualises minors”.

As the BBC report, Mystic7, Trainer Tips and Marksman all found their accounts removed not long after uploading footage of themselves playing Pokemon GO.

It’s believed the error occurred thanks to their video’s continued use of the term “CP”, which in Pokemon GO refers to “Combat Points”, but which YouTube’s algorithm assumed was “Child Pornography”.

That's pretty stupid, and it certainly seems like a ban of an entire Google account based on the use of an acronym ought to have come with a review from an actual human being. That human would have immediately understood the context of the use of "CP" in a way the automated system apparently could not. And, to be clear, this wasn't just a YouTube ban. It was the elimination of each streamer's entire Google account, email and all.

Now, once the backlash ensued, Google restored their accounts, but that simply isn't good enough. As pressure mounts to ramp up automated policing of the internet, at some point everyone pushing for those solutions needs to realize that the technology just isn't any good.

Filed Under: automatic filters, automation, child porn, cp, false positives, pokemon, youtube filters
Companies: youtube


Reader Comments



  • Anonymous Coward, 22 Feb 2019 @ 3:51pm

These gamers need to stop referring to Canadian Pacific and cerebral palsy right now! Let the banhammers fall.


  • Anonymous Coward, 22 Feb 2019 @ 3:57pm

    Candied Pickle enthusiasts must be sweating vinegar.


  • Matthew Cline (profile), 22 Feb 2019 @ 3:59pm

    So if you wanted to use YouTube to discuss, say, fighting child porn, to avoid an auto-ban you'd have to refer to it as "you know what" like you were a Harry Potter character not daring to directly refer to Voldemort?


    • Anonymous Coward, 22 Feb 2019 @ 4:41pm

      Re:

      You joke about that, but this is a thing that has really happened. Back in the day, when AOL and CompuServe ruled the consumer Internet, one of them (don't remember which) decided to implement vulgarity filters on their chatrooms, banning a bunch of obscene and naughty words, including discussion of breasts and other, more sophomoric euphemisms for breasts.

      This made it extremely awkward for medical professionals attempting to use online services to discuss breast cancer research!


      • Anonymous Coward, 23 Feb 2019 @ 10:26am

        Re: Re:

        AOL even had "Guides" who would monitor chats (in exchange for free membership), but a wage lawsuit stopped that.

        On AOL at least, there were areas where people could speak freely, even a free-speech chatroom which was set up over threatened litigation.


      • Anonymous Coward, 24 Feb 2019 @ 12:35pm

        Re: Re:

It was worse than that - child molestation survivor support chats were being banned and filtered out. And this was just over their naming, not, say, context-free analysis of raw accounts of their experiences of being raped. The latter would still be bad, but it would be more understandable as 'inappropriate', even though it has an important therapeutic role.

        The very children who they were claiming to try to protect with their censorship.


    • Anonymous Coward, 22 Feb 2019 @ 5:10pm

      Re:

      "CP" is already a way to avoid a direct reference to "child pornography", so you'll have to stay ahead of the filter—it's the euphemism treadmill, at Internet speed. Let's hope Google doesn't look at past messages in these decisions; otherwise trolls will have all kinds of fun targeting people, by causing Google's AI to think messages written long ago refer to banned topics.


      • Anonymous Coward, 24 Feb 2019 @ 12:37pm

        Re: Re:

        That is old hat with the infamous /pol/ and trying to associate tech company names with racial slurs. In practice it was very easy to tell who the nazi-shitheads were because nobody refers to Google in plural - googols yes.


  • Phoenix84 (profile), 22 Feb 2019 @ 4:00pm

    That's racist. CP Time is for the culture.


  • Anonymous Coward, 22 Feb 2019 @ 4:07pm

    Getting the accounts back is fine. Nothing says that this has to be done by bots. That's what Google chose to do, and if it runs afoul of something, well, that's a problem with their business model that needs to be fixed.

    YouTube has a big problem with pedophiles latching onto videos of young children in the comments section (like this should surprise anyone).

    This comes down to the internet versus copyright, and which one should trump the other.


    • Matthew Cline (profile), 22 Feb 2019 @ 4:10pm

      Re:

      This comes down to the internet versus copyright,

      ????

      What in the world does this have to do with copyright?


      • Anonymous Coward, 22 Feb 2019 @ 4:30pm

        Re: Re:

        He's indirectly attacking Article 11/13, SOPA, etc. or any system of automatic filtering.

        If automatic filtering doesn't work, Google built its business wrong and some other business will get it right, either with better bots or humans.


        • Matthew Cline (profile), 22 Feb 2019 @ 5:00pm

          Re: Re: Re:

          He's indirectly attacking Article 11/13, SOPA, etc. or any system of automatic filtering.

          It's not possible for him to care about this in-and-of itself?


        • Anonymous Coward, 22 Feb 2019 @ 5:04pm

          Re: Re: Re:

          Like all those IP address harvesters that keep finding the wrong people but you still insist on using? Sure... squander what little credibility you had left.


        • Anonymous Coward, 25 Feb 2019 @ 2:42pm

          Re: Re: Re:

          He's indirectly attacking Article 11/13, SOPA, etc. or any system of automatic filtering.

          That's some serious projection there. But even if true, so? He's not wrong.

          If automatic filtering doesn't work, Google built its business wrong and some other business will get it right, either with better bots or humans.

Then you don't understand how automatic filtering works, or how bad it is at understanding context. All the filter cares about is "does this content contain the tags/IDs/keywords I'm programmed to look for and block?". If no, it ignores it; if yes, it takes it down. But as seen here, it was programmed to look for the acronym "CP", not understanding that there are many other contexts in which the acronym "CP" can be used that in no way relate to the content it is looking for.

          And you're never going to fix that because computers and software as a whole are equally bad at context. They are good at concrete facts that can be asserted as unambiguously true or false. They don't do "true but unrelated" or "false but still counts".
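          The keyword matching described here can be sketched in a few lines. This is a toy illustration only; the names (`BANNED_TERMS`, `flags`) are hypothetical, and YouTube's actual system is not public:

          ```python
          # Minimal sketch of naive keyword flagging, illustrating why a filter
          # that matches acronyms without context produces false positives.
          import re

          BANNED_TERMS = {"cp"}  # acronym the hypothetical filter treats as prohibited

          def flags(text: str) -> bool:
              """Flag text if any banned term appears as a standalone word."""
              words = re.findall(r"[a-z0-9]+", text.lower())
              return any(w in BANNED_TERMS for w in words)

          # A Pokemon GO video title trips the filter even though "CP" means
          # "Combat Points" here -- the filter has no notion of context.
          print(flags("Catching a 4000 CP Mewtwo in Pokemon GO!"))  # True
          print(flags("Best raid counters ranked by damage"))       # False
          ```

          The only signal the filter has is the token itself, which is exactly why both the Pokemon GO title and a genuinely abusive one look identical to it.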


          • PaulT (profile), 25 Feb 2019 @ 10:55pm

            Re: Re: Re: Re:

            https://en.wikipedia.org/wiki/Scunthorpe_problem

Exactly, they will never be as good as a human at understanding meaning, yet humans can never be as quick as a computer. There will always be mistakes, and if YouTube are given the choice between losing Pokemon fans through silly overcensoring mistakes and losing advertisers for not censoring enough, they'll favour the people actually paying them. This should surprise nobody, but if you need to blame someone, blame the legislators who seem to think that magic wands can catch these things and the dinosaurs who hired them.


    • Anonymous Coward, 22 Feb 2019 @ 4:12pm

      Re:

      Aside from the life-destroying accusation and sudden deprivation of Google services, people generally are unable to appeal decisions like these unless there is a lot of backlash. This is in part due to Google's enormous size - they are unable to scale manual inspections of videos like these.


      • Rekrul, 22 Feb 2019 @ 5:13pm

        Re: Re:

        Aside from the life-destroying accusation and sudden deprivation of Google services, people generally are unable to appeal decisions like these unless there is a lot of backlash. This is in part due to Google's enormous size - they are unable to scale manual inspections of videos like these.

        Yeah, after all, it's not as if a company that makes multibillion dollar profits off the backs of users could be expected to actually hire people to provide customer service to those people!

What's next? Will people expect to be able to call a company like Comcast, AT&T or VISA and actually speak to a human being? The ridiculousness of such an idea boggles the mind...


        • Toom1275 (profile), 22 Feb 2019 @ 6:07pm

          Re: Re: Re:

          Just how cheap do you think hiring 75,000+ people to do nothing but watch new Youtube uploads 8/5 would be?


          • Anonymous Coward, 23 Feb 2019 @ 10:27am

            Re: Re: Re: Re:

            If they can't afford to monitor their content to ensure it complies with the law, then their business model is obsolete.


            • Anonymous Coward, 23 Feb 2019 @ 11:12am

              Re: Re: Re: Re: Re:

If they cannot monitor the content, what makes you think that lots of smaller sites could do any better? Also, why should they act as police to ensure that their users comply with the laws of the country that they live in?


              • Toom1275 (profile), 23 Feb 2019 @ 3:53pm

                Re: Re: Re: Re: Re: Re:

                Also, why should they act as police to ensure that their users comply with the laws of the country that they live in?

                In the case of copyright, at least, they shouldn't. It's neither their requirement nor responsibility.


            • Anonymous Coward, 25 Feb 2019 @ 2:58pm

              Re: Re: Re: Re: Re:

They can monitor their own content just fine. It's the terabytes and petabytes and more of user-generated content that is impossible to monitor with any reasonable number of human beings. More content is uploaded by users to Youtube in a single minute than a single human being could watch in two weeks.

              Especially in the US, the law says platforms aren't responsible for the actions or content of their users. If you don't like it, well, too bad for you.


            • PaulT (profile), 25 Feb 2019 @ 10:51pm

              Re: Re: Re: Re: Re:

              "If they can't afford to monitor their content"

              That's the point - it isn't their content.


          • Rekrul, 25 Feb 2019 @ 11:55am

            Re: Re: Re: Re:

            Just how cheap do you think hiring 75,000+ people to do nothing but watch new Youtube uploads 8/5 would be?

            I never suggested that they should hire people to watch every video that gets uploaded. I suggested that they should hire people to provide customer service so that when their system incorrectly flags a video, or someone files a false copyright strike against someone's account, they can actually contact a live human and get them to look into the situation.


            • Anonymous Coward, 25 Feb 2019 @ 2:58pm

              Re: Re: Re: Re: Re:

              So, there is this video of someone playing a song and the person looking into it has not heard the song before. How are they meant to determine whether the uploader or the company making the claim is the copyright holder?

              How in general is a low paid employee meant to determine who the actual copyright holder is, or where the boundaries of fair use are?


            • PaulT (profile), 25 Feb 2019 @ 10:49pm

              Re: Re: Re: Re: Re:

Which, at the rate that the **AAs are sending takedowns, will still require a lot of people to be hired, especially as they would then presumably be required to take into account things like fair use and the identity of the uploader rather than just the content of the video. Each investigation would therefore take far, far longer than the automated one, since if they had a human just do the same cursory inspection they would be more liable for legal comeback than if they just claim an error in an algorithm.

              I know what you're saying, but it's the volume that's the problem and introducing humans into the mix will make things worse not better.


              • Rekrul, 26 Feb 2019 @ 12:09pm

                Re: Re: Re: Re: Re: Re:

                Relying on automation to run your company with virtually no human oversight is a crappy way to do business. They want the benefits of running a huge platform full of user generated content, but they can't be bothered to make sure that their platform doesn't screw over the users.

                Would you use a vending machine if you knew there was a chance it would just take your money and not give you what you requested, and if that happened, there was nobody to contact to get your money back?

                What if some piece of construction equipment accidentally created a huge pothole in the road, right at the end of your driveway and there was no way to contact the city and get it fixed?

                What if the post office decided that you were some type of scammer and removed your address from their system so that you would no longer receive your mail and there was no appeal process?


                • Anonymous Coward, 26 Feb 2019 @ 1:52pm

                  Re: Re: Re: Re: Re: Re: Re:

                  Relying on automation to run your company with virtually no human oversight

                  Asserts facts not in evidence. There is absolutely human oversight on these things as noted by many news articles and reports. The problem is not oversight or no oversight, the problem is the automation is just bad at what it's being asked to do. And this is true for all computers and software. They are being asked to determine context and they just can't do that effectively.

                  is a crappy way to do business.

No, it's a new way to do business. A comparable analogy would be car factories where cars are built mostly by automated robots. The process has human oversight, but mostly just to make sure the robots don't run wild. Finished cars are spot checked to ensure quality, but that's it. It's not a bad way to do business, just different and somewhat new. Most businesses today wouldn't be in business if they didn't rely on some sort of automation, be it software applications, robotics, or otherwise.

                  They want the benefits of running a huge platform full of user generated content, but they can't be bothered to make sure that their platform doesn't screw over the users.

                  Well, human oversight is technically what screwed over the users in this case. They made a change and it impacted innocent users. Technically the system was working fine before it.

                  Would you use a vending machine if you knew there was a chance it would just take your money and not give you what you requested, and if that happened, there was nobody to contact to get your money back?

Not even remotely similar to what's going on here. And there is someone to contact in this case to get resolution, as evidenced by the fact that they all got resolution and got their stuff back.

                  What if some piece of construction equipment accidentally created a huge pothole in the road, right at the end of your driveway and there was no way to contact the city and get it fixed?

                  Again, there was someone to contact, they did, and stuff got fixed. The fact that the people they contacted weren't willing to listen until there was a big backlash is a completely separate issue.

                  What if the post office decided that you were some type of scammer and removed your address from their system so that you would no longer receive your mail and there was no appeal process?

                  Again, same as above. Your analogies are not the same and not relevant.


                • PaulT (profile), 27 Feb 2019 @ 12:25am

                  Re: Re: Re: Re: Re: Re: Re:

                  Nobody's arguing ZERO oversight, they're saying that the kind of intervention you're talking about will not scale.

                  If you're going to argue, with people, at least be honest about what the opposing argument is.


        • Anonymous Coward, 22 Feb 2019 @ 7:26pm

          Re: Re: Re:

          Arent these free services? If so, I dont see why they should have to invest to monitor every tom dick and harry.


          • Rekrul, 25 Feb 2019 @ 12:01pm

            Re: Re: Re: Re:

            Arent these free services? If so, I dont see why they should have to invest to monitor every tom dick and harry.

            They are free for people to use, but they are how Google makes money. They want people to use their services, but when something goes wrong and one or more of those users gets screwed, they don't want to be bothered trying to fix the problem.

            They want their money making service to run on autopilot so that they don't actually have to make any effort to deal with the people who are helping them make money.


            • PaulT (profile), 25 Feb 2019 @ 10:50pm

              Re: Re: Re: Re: Re:

Those people are free to use another service if they have a problem. In fact, if more of them did so instead of whining that the service they decided to use exclusively isn't treating them like princesses, there'd be more large competitors anyway.


        • Anonymous Coward, 23 Feb 2019 @ 9:13am

          Re: Re: Re:

Will people expect to be able to call a company like Comcast, AT&T or VISA and actually speak to a human being?

Will people expect to pay Google as much as they pay those companies? Funding a call centre is expensive, and built into bills and fees.


      • Anonymous Coward, 22 Feb 2019 @ 5:17pm

        Re: Re:

        they are unable to scale manual inspections of videos like these.

        It is not possible to manually monitor all conversations that take place where the public can gather, like pubs, clubs, cafes and restaurants, so why should all online conversations be monitored?


        • Anonymous Coward, 22 Feb 2019 @ 7:38pm

          Re: Re: Re:

Online moderation is possible, and it's definitely possible for a bot to simply flag a suspected infringing video for human review rather than just nuke it. Might cost a bit more, but so does not giving out plastic bags at supermarkets.


          • Anonymous Coward, 23 Feb 2019 @ 7:20am

            Re: Re: Re: Re:

and it's definitely possible for a bot to simply flag a suspected infringing video for human review rather than just nuke it.

            The problem with that idea is that the human side does not scale well when the platform is open, indeed that is the very problem being discussed here.


          • Anonymous Coward, 24 Feb 2019 @ 1:27am

            Re: Re: Re: Re:

            So you are saying because it is sort of possible it should be done. That is not a good answer as to why human conversations should be controlled. It also raises the question of who decides what is and isn't allowed.


        • Anonymous Coward, 22 Feb 2019 @ 9:34pm

          Re: Re: Re:

          I'm pretty sure there is some government looking to install Alexa in every pub


    • Anonymous Coward, 22 Feb 2019 @ 4:22pm

      Re:

      This comes down to the internet versus copyright, and which one should trump the other.

      Copyright more important than user rights. Got it.


    • Anonymous Coward, 22 Feb 2019 @ 6:01pm

      Re:

      Nothing says that this has to be done by bots.

      Nothing, of course... except for sheer practicality and issues of scale.

      Viacom, for one, insisted that videos they uploaded themselves infringed on their own copyright.

      If even the original copyright holder can't tell whether something uploaded violates their own copyright then what fucking chance does another human have?


      • Anonymous Anonymous Coward (profile), 22 Feb 2019 @ 6:14pm

        Re: Re:

Except those videos did not violate Viacom's copyright, since Viacom was the one uploading them. The problem was that whatever flagged them as a violation (was it Content ID?) did not have all the facts, including the fact that the rights holder was the uploader.

        Of course, if there was a database of all copyrighted material linked to the actual rights holders, and the uploaders were identified as the rights holders then this wouldn't be an issue. So, where are those databases and the crosschecking software that denotes A=A?


        • That One Guy (profile), 22 Feb 2019 @ 6:32pm

          Re: Re: Re:

          Of course, if there was a database of all copyrighted material linked to the actual rights holders, and the uploaders were identified as the rights holders then this wouldn't be an issue.

          So long as you ignore that pesky 'fair use' bit, sure. Even if there was such a database, that accurately listed every copyrighted work, who owns it, who has a license to use it and how, you'd still be stuck with the problem of fair use providing false positives as people who didn't own a work and didn't have a license to use it would still be able to legally do so.


          • Anonymous Anonymous Coward (profile), 22 Feb 2019 @ 6:47pm

            Re: Re: Re: Re:

            So true, but we cannot figure out how to do that second step without any ability to do the first. Having some legitimate review process (see suggestions below) would go a long way, to start.


            • That One Guy (profile), 22 Feb 2019 @ 8:27pm

              Re: Re: Re: Re: Re:

              Oh no question, being able to at least make sure that only the owner[1] could make claims would cut down on a lot of bogus or outright fraudulent claims, assuming the system was accurate(false positives are of course a given, though it wouldn't take much to get better than current), which would drastically lower the number of cases which would even reach the 'now, is it fair use?' point, so on those grounds I could certainly see your suggestion being a good idea, if perhaps not exactly viable at the moment, given the sheer scope of works you'd be talking about and the... less than ideal filter systems we currently have.

              [1]Or those legally under contract to make claims on their behalf, which would, ideally, not shield the original owner from liability should the second party screw up, to motivate accuracy in who they hire rather than just 'who can file the most claims?'


        • PaulT (profile), 25 Feb 2019 @ 1:00am

          Re: Re: Re:

          "Except those videos did not violate Viacom's copyright, since they were the ones uploading those videos. The problem was that whatever said (was it Content ID) i"

          Lying, or just wrong, yet again?

          Those videos had nothing to do with Content ID, they were named in the lawsuit by Viacom, and then removed when it was noticed that they had actually uploaded them.

          "So, where are those databases and the crosschecking software that denotes A=A?"

          It's not possible for that to exist, unless you wish to go back to the days where people have to register for each copyright they wish to hold rather than it being automatically assigned upon creation. Companies like Google would love that, because it means that they are not held to any standard beyond "is this a match in the database", rather than the wooly shit they have to deal with now.


      • Anonymous Coward, 22 Feb 2019 @ 7:37pm [flagged by the community]

        Re: Re:

        Scale is not the law's problem. Bots are an attempt to cut corners that doesn't work. Human review IS possible.


        • Anonymous Coward, 22 Feb 2019 @ 8:51pm

          Re: Re: Re:

          So get humans to review all those settlement letters you send out.

          Oh, that's right. Not only do you not, you refuse to. Because suing grandmothers is so much more lucrative.


          • Anonymous Coward, 23 Feb 2019 @ 12:39am

            Re: Re: Re: Re:

            I don't send out settlement letters. I did report mass-piracy, which was harming my revenue, but individual cases don't really bother me. I just wish the laws would make it impossible for pirates to operate. As of now, it doesn't.


            • Anonymous Coward, 23 Feb 2019 @ 1:39am

              Re: Re: Re: Re: Re:

              Except that that's exactly who your enforcement targets - individuals who are least likely to be capable, in terms of either legal knowledge or money, or both, of defending themselves in court.

              How many dead people did you exhume just for your porn money?


              • Anonymous Coward, 23 Feb 2019 @ 10:31am [flagged by the community]

                Re: Re: Re: Re: Re: Re:

                It wasn't porn, and it wasn't individuals who were sent a notice by me. It was a large, mass-piracy website which had changed the title, author's name, and cover art so that people didn't even realize it was my work.

                I'm not a copyright troll, just a supporter of strong enforcement so that I don't have to send DMCA notices in the first place.


                • Anonymous Coward, 23 Feb 2019 @ 11:15am

                  Re: Re: Re: Re: Re: Re: Re:

Was the content word-for-word identical to what you wrote as well?

                  Also, citation needed.


                  • Anonymous Coward, 23 Feb 2019 @ 3:18pm

                    Re: Re: Re: Re: Re: Re: Re: Re:

                    Old Jhon boy isn’t just allergic to citations. He’s a self admitted scammer and spammer.


        • Toom1275 (profile), 22 Feb 2019 @ 8:57pm

          Re: Re: Re:

          [citation still needed]


          • That One Guy (profile), 22 Feb 2019 @ 11:29pm

            'Technically right', the best kind of right

            No, they're totally right, human review of the content uploaded to YT is possible... in the same way that it is possible to move the Sahara desert to the north pole, one grain at a time.


        • Stephen T. Stone (profile), 23 Feb 2019 @ 4:55am

          Human review IS possible.

          Possible? Yes. But the practicalities of human review of every upload to YouTube come from the realm of the absurd. The costs of hiring enough people to review the hours upon hours of video uploaded every hour would, on their own, be ridiculously high.

          • Anonymous Coward, 23 Feb 2019 @ 5:41am

            Re:

            No one is calling for that, moron. Nice strawman, though.

            Objectionable content is a small minority of "all content on YouTube," and human review of material that gets flagged is possible.

            Let the algorithms run and try to find things that are bad. Nothing wrong with that. What is wrong is letting an algorithm that lacks the competence to make the final judgment actually make that final judgment. Pass that along to a moderator and problems like these go away.
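            The flag-then-review pipeline this comment describes can be sketched in a few lines. This is a hypothetical illustration, not YouTube's actual system; every name here (`ModerationQueue`, `naive_cp_classifier`, the 0.8 threshold) is invented:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Upload:
    video_id: str
    transcript: str

@dataclass
class ModerationQueue:
    """Flagged uploads wait here for a human verdict instead of an auto-ban."""
    pending: list = field(default_factory=list)

    def submit(self, upload: Upload,
               classifier: Callable[[Upload], float],
               threshold: float = 0.8) -> str:
        if classifier(upload) < threshold:
            return "published"            # nothing suspicious; no human needed
        self.pending.append(upload)       # the algorithm only *flags*...
        return "held for human review"    # ...it never makes the final call

    def review(self, upload: Upload, human_says_ok: bool) -> str:
        self.pending.remove(upload)
        return "published" if human_says_ok else "removed"

# A toy classifier with exactly the failure mode from the article:
# it sees the token "cp" and panics, context be damned.
def naive_cp_classifier(upload: Upload) -> float:
    return 1.0 if "cp" in upload.transcript.lower().split() else 0.0

queue = ModerationQueue()
video = Upload("abc123", "my best Pokemon has over 3000 cp")
print(queue.submit(video, naive_cp_classifier))    # held for human review
print(queue.review(video, human_says_ok=True))     # published
```

            The point of the sketch is the division of labor: the dumb classifier is allowed to be dumb, because its worst outcome is a queue entry, not a nuked account.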

            • Stephen T. Stone (profile), 23 Feb 2019 @ 9:54am

              Re: Re:

              human review of material that gets flagged is possible.

              And if you think that number is vanishingly low, you’ve probably never heard of report-bombing.

              • Wolfie0827 (profile), 23 Feb 2019 @ 10:26am

                Re: Re: Re:

                But with human review, report bombing would be detected more easily, and the bombers could then be banned through some method. (I haven't thought about the method enough to have an idea of what might work; maybe nothing would completely, but something might come up that would greatly reduce it.)

                • Stephen T. Stone (profile), 23 Feb 2019 @ 11:50am

                  Re: Re: Re: Re:

                  Question: How can someone tell the difference between an “uncoördinated” report bombing (where people are reporting a video independently of each other because it breaks the TOS) and a “coördinated” report bombing (where people are reporting a video as part of a pre-arranged campaign to attack the person who posted the video)?
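                  One plausible (and admittedly imperfect) timing-based signal can be sketched as hypothetical code: organic reports on a genuinely TOS-breaking video tend to trickle in as viewers stumble across it, while a pre-arranged brigade arrives almost all at once. The function name and every threshold below are invented for illustration:

```python
def looks_coordinated(report_times: list[float],
                      window_s: float = 600.0,
                      burst_fraction: float = 0.8,
                      min_reports: int = 10) -> bool:
    """Heuristic: if most reports land inside one short window, suspect a brigade."""
    if len(report_times) < min_reports:
        return False                      # too few reports to judge either way
    ts = sorted(report_times)
    densest, j = 0, 0
    for i in range(len(ts)):              # sliding window over sorted timestamps
        while ts[i] - ts[j] > window_s:
            j += 1
        densest = max(densest, i - j + 1)
    return densest / len(ts) >= burst_fraction

organic = [i * 1728.0 for i in range(50)]           # ~1 report per half hour
brigade = [100_000.0 + 6.0 * i for i in range(50)]  # 50 reports in 5 minutes
print(looks_coordinated(organic))   # False
print(looks_coordinated(brigade))   # True
```

                  Timing alone can be fooled, of course: a video going viral also produces a burst of honest reports. At best a signal like this prioritizes cases for human review; it can't answer the question on its own.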

            • Toom1275 (profile), 23 Feb 2019 @ 3:56pm

              Re: Re:

              That still glosses over the problems that the flagging and automation don't catch.

            • Anonymous Coward, 24 Feb 2019 @ 10:44pm

              Re: Re:

              Objectionable content is a small minority of "all content on YouTube,"

              Not according to the RIAA. And given that many copyright holders would rather fair use did not exist, under the assumption that fair use is not a defense against getting flagged, the amount of content considered objectionable under rightsholder definitions would increase immensely.

              human review of material that gets flagged is possible

              See above. Rightsholders regularly complain that the algorithms let content slip through the cracks anyway.

              What is wrong is letting an algorithm that lacks the competence to make the final judgment actually make that final judgment

              This is exactly what rightsholders have been demanding: an algorithm that doesn't require human review to wipe target videos off the site. It's called "notice and staydown"; you've probably heard of it.

              Pass that along to a moderator and problems like these go away.

              Here's the thing: such a system already exists! If it didn't, HBO's own site would have been nuked from orbit after HBO flagged it as a pirate site.

              Eventually Google is going to get tired of putting up with this, and of burning resources trying to program around the stupidity of copyright enforcement.

          • Anonymous Coward, 23 Feb 2019 @ 10:31am

            Re:

            Even if that were the case, that would show the bot-based business model to be obsolete.

  • Anonymous Coward, 22 Feb 2019 @ 4:44pm

    So what sort of idiot set up these systems without realizing that acronyms can have multiple meanings, particularly in specialized jargon?

    I work in the medical industry, and it's not uncommon to have a discussion in which the rather unlikely combination of "D&D" and "MAGA" comes up, and neither of those means what an outsider might think they mean.
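    The fix this comment implies, looking at the words around an ambiguous acronym before deciding what it means, can be sketched as a toy disambiguator. The sense table and cue words below are invented for illustration:

```python
# Hypothetical sense table: which nearby words disambiguate the acronym "CP".
CP_SENSES = {
    "combat points":    {"pokemon", "pokestop", "raid", "evolve", "iv"},
    "canadian pacific": {"railway", "train", "locomotive", "freight"},
    "cerebral palsy":   {"therapy", "diagnosis", "motor", "pediatric"},
}

def disambiguate_cp(text: str) -> str:
    """Score each sense by how many of its cue words appear alongside 'CP'."""
    words = set(text.lower().split())
    scores = {sense: len(words & cues) for sense, cues in CP_SENSES.items()}
    best = max(scores, key=scores.get)
    # No contextual evidence at all? Then a machine has no business deciding.
    return best if scores[best] > 0 else "unknown -- escalate to a human"

print(disambiguate_cp("this Pokemon has 3000 CP after the raid"))
# combat points
```

    Even this trivial sketch gets the Pokemon GO case right, which is rather the commenter's point: the failure wasn't that disambiguation is hard, it's that nobody (human or machine) looked at the context at all.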

  • Max, 22 Feb 2019 @ 4:47pm

    It would certainly complete this article to mention that the whole hullabaloo started with a certain YouTuber publishing an appropriately enraged video about how YouTube keeps serving you seemingly innocent videos of kids, conveniently annotated (in the comments) with timestamps to semi-erotic moments, accidentally showing something kinda-sorta relevant to a paedophile. Allegedly, you get a lot of these suggested once you "train" the YouTube algorithm, starting with the "right" search term and repeatedly following the "right" suggested material. The ensuing nuclear blast on anything remotely resembling the term "CP" in any sense is merely collateral damage...

    • Anonymous Coward, 22 Feb 2019 @ 4:58pm

      Re:

      Ugh, here we go again.

      Louie, Louie! Oh, oh! We gotta go!

    • Anonymous Coward, 22 Feb 2019 @ 5:35pm

      Re:

      Because moderating the comments is hard.

    • Anonymous Anonymous Coward (profile), 22 Feb 2019 @ 6:08pm

      Re:

      Isn't the problem really that Content ID is not learning from its mistakes? One would think that each time Content ID makes an error, someone, or something (machine learning), would make some correction to the Content ID algorithm, which would lead one to believe that it would be getting better.

      It isn't. So there is no someone, or something, making corrections to the Content ID algorithm. Shame on Google. Then again, those corrections might make Content ID worse; still, shame on Google. That each and every request for review isn't sent to some third party with no vested interests to determine, at least initially (there could and should be follow-up proceedings in courts of law), whether something nefarious has taken place is yet another shame on Google.

      • Anonymous Coward, 25 Feb 2019 @ 5:09pm

        Re: Re:

        You can't just have a single binary training model for 'is this covered by copyright or not'. It really doesn't work that way at all.

        • PaulT (profile), 25 Feb 2019 @ 11:59pm

          Re: Re: Re:

          Exactly. Anyone who thinks this is an easy task doesn't understand the problem. Anyone who thinks that ContentID should or even can be perfect doesn't understand how much of a mess copyright law is, and how much is covered by context that Google doesn't have direct access to or cannot fully determine (for example, fair use which is both a defence to standard copyright claims and highly subjective in its application).

          If it were merely a case that there's a list of protected works and Google had to match against that list, this would not be a problem. But, it's massively more complex than that, and people would rather blame Google than admit the entire copyright system is massively flawed at a fundamental level.
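          The split PaulT describes can be made concrete with a hypothetical sketch: matching against a fixed list of protected works really is a simple lookup, and it's everything after the match that resists automation. (Real systems use perceptual fingerprints rather than raw hashes, but the lookup structure is the same idea; all names below are invented.)

```python
import hashlib

# The easy part: exact matching against a known list of protected works.
PROTECTED = {
    hashlib.sha256(b"frame-data-of-some-movie").hexdigest(): "Some Movie",
}

def exact_match(clip_bytes: bytes):
    """Return the matched work's title, or None if the clip is unknown."""
    return PROTECTED.get(hashlib.sha256(clip_bytes).hexdigest())

print(exact_match(b"frame-data-of-some-movie"))  # Some Movie
print(exact_match(b"my-vacation-video"))         # None

# The hard part has no lookup table: the SAME matched clip may be infringement
# or fair use depending on context (commentary? parody? news reporting?) that
# the bytes themselves do not contain -- which is why a match alone cannot
# justify an automated takedown.
```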

    • Rog S., 23 Feb 2019 @ 11:22am

      Re: Youtube pulls 400 channels

      The guy is Matt Watson, who was confronting YouTube's biased demonetization.

      https://www.cbsnews.com/news/youtube-axes-more-than-400-channels-over-child-exploitation-controversy/

  • Mononymous Tim (profile), 22 Feb 2019 @ 4:55pm

    The stupidity involved in treating 2 letters together as meaning only one (terrible in this instance) thing hurts my brain.

  • Anonymous Coward, 22 Feb 2019 @ 5:03pm

    Obvious based on article

    My understanding is that the people actually discussing child porn must be doing it in terms that mimic Pokemon enthusiasts' terms and sayings. That is the only thing that would make sense based on the information provided.

  • Rekrul, 22 Feb 2019 @ 5:18pm

    Canadian Pacific

    Central Pacific

    Content Protection

    Corporal Punishment

    Not to mention that people often refer to movies and TV shows by their initials. I can't think of any at the moment, but I'm sure they exist with the initials C.P.

  • That One Guy (profile), 22 Feb 2019 @ 5:23pm

    'This time'

    Now, once the backlash ensued, Google got them their accounts back, but that simply isn't good enough.

    Especially when you consider that:

    A) Pokemon is kinda popular, with lots of people talking about it.

    and

    B) Talking about Pokemon has a good chance of mentioning 'CP', it being part of the game.

    These three got their accounts back after the matter went public, how many people were given the axe, and will be given the axe, who won't be so lucky?

  • Anonymous Coward, 22 Feb 2019 @ 5:31pm

    If these youtubers...

    If these youtubers just realised that the consequence of discussing obvious codewords like cheese pizza is naturally gonna result in youtube sending gun-toting maniacs round to their place to free the kids from their nonexistent basement as part of their new automated fostaID system, they'd not get into this kind of trouble!

  • Anonymous Coward, 22 Feb 2019 @ 5:53pm

    I wonder how Chris Pratt and Carey Price (to name only two) feel about this...

  • That Anonymous Coward (profile), 22 Feb 2019 @ 8:22pm

    Google - Confirming once again we will never have AI overlords... because we can't code intelligence.

  • Bruce C., 22 Feb 2019 @ 9:44pm

    Speculating: Canadian railfan channels are next...Canadian Pacific videos to be banned shortly.

  • d, 23 Feb 2019 @ 10:14am

    EU, anyone?

    Is this a sign of things to come, regarding the proposed stupid EU copyright regulations? It soon won't be worth actually having the internet at all at this rate, which will, no doubt, please our favourite (not) trilling troll no end.

  • This comment has been flagged by the community.
    Rog S., 23 Feb 2019 @ 10:28am

    It's corporate feminism 101

    Empowering Women and Girls isn't just a motto... it has real-world activity associated with it.

    Women aren't generally as hot and bothered by this as men are, and it is primarily women raising and encouraging these girls in the videos, as well as encouraging such entrepreneurial endeavors.

    It's men (and shitloads of neocons and gender lesbians) who view these videos and find them steamy, sexual, and offensive.

    It's amazing that in the year 2019, the mere sight of young girls doing what young girls do can still make (men, neocons, and gender lesbians) apoplectic and calling for censorship.

    This guy Matt Watson is nearly Busting a Nut of Fury over it, he was so disturbed by viewing these girls:

    https://m.youtube.com/watch?v=O13G5A5w5P0

    Even more hilarious that we tolerate men ranting and raving over being demonetized by corporations who on one hand “empower women and girls,” and on the other hand... never mind...

    I wonder what Freud would have to say about transference and projection in these absurdist internet dramas?

    Oh, yeah, that's right: internet influence operations and mass mind control are a “conspiracy theory.”

    • This comment has been flagged by the community.
      Rog S., 23 Feb 2019 @ 7:45pm

      Re: its corporate feminism 101

      LOL: Techdirt -a safe space for people whose critical thinking skills cause them pain.

  • This comment has been flagged by the community.
    SA Church Lady, 23 Feb 2019 @ 7:51pm

    A primer in monetized feminism

    Re: empowering Women and Girls isn't just a motto... it has real world activity associated with it.

    In feminist theory / Barthes-styled deconstruction, they focus on the issue of “the gaze” of the viewer.

    Male gaze v. female gaze are purportedly different, with the male gaze being deviant and prurient, and the female gaze always pure and wholesome.

    Even a cursory peek at feminist literature reveals that neither is true.

  • Anonymous Coward, 24 Feb 2019 @ 5:51am

    The technology will never be good

    AI research isn't the magic cure-all that people think it is. We will never develop an AI that's as 'smart' as a human, nor one that can appropriately parse natural human language.

  • Annonymouse, 25 Feb 2019 @ 7:12am

    The questions here are: were the takedowns vetted? Will they be vetted in the future? Did the boss take responsibility? Will this be thrown in the face of the idiots writing the laws? Can the YouTubers get their followers to make these lawmakers' lives a living hell?

  • Anonymous Coward, 25 Feb 2019 @ 11:47am

    Meanwhile, advertisers are pulling out of YouTube because of things that aren't being caught by its filters. So the financial incentive for YouTube is to take down more, not less.

    • Anonymous Coward, 25 Feb 2019 @ 6:14pm

      Re:

      Marketing ruins everything: first television, then cable, and now the Internet will be moulded to its desires.

