Vermont IP Lawyer’s Techdirt Profile

Vermont IP Lawyer’s Comments

  • May 15th, 2019 @ 9:17am

    Re: Letter re Potential 3rd Party Claims

    In a spirit of transparency, I admit that I almost always represent vendors rather than users. With that admission, let me offer another theory. I haven't looked at Adobe's actual fine print in a long time, so this comment isn't limited to Adobe. Odds are, the license grant is perpetual, i.e., it continues indefinitely unless terminated because of breach by the licensee. These days, software support almost always costs extra, so perhaps there is language that says "we'll support the current version and one or two prior versions"--seems to me it's fair enough for a vendor to say, "if I fixed the problem you're complaining about in a new version, you'll have to update to that version to get the fix."

    The issue of infringement is typically a separate matter. The vendor often (I'm not sure of the exact statistics) says it will indemnify for infringement unless the infringement would be cured by switching to a new version to which the user could have updated but did not. As a vendor, if one comes across potential infringement, it is VERY awkward to have to say to users "Please update because your current version infringes [insert name of IP owner]'s rights"--it's like wearing a "Kick Me" sign. Referring, as Adobe does, to unidentified third parties at the time an update becomes available is a not unreasonable approach.

  • May 9th, 2019 @ 4:49am

    Re: Re: Who Will "Moderate"?

    I mainly agree with this comment by Scary Devil Monastery. I would only add that it is challenging to predict how the Microsoft initiative will evolve. If it makes Microsoft money, Microsoft's competitors will notice and join in the fun, so there will be alternative content-moderation bots from AWS, Google, etc., and they might all differ in what they filter. Also, it seems to me that at least some of these bot systems will offer configurable/trainable bots (perhaps neural-network style), as not every bot user will want the same filtering effect. Might this somewhat mitigate the adverse consequences? Perhaps not--perhaps, as Scary Devil Monastery warns, it will just be one more path to a "chilling self-censorship effect."

  • May 7th, 2019 @ 4:00pm

    Who Will "Moderate"?

    One of the arguments often offered on this website has been that rules of this kind will tend to lock in the positions of the larger market participants (Google, Facebook, etc.) because only those companies will have the resources to attempt content moderation in a remotely sensible way. Per this line of argument (with which I am sympathetic), even the big guys will get it wrong a lot of the time, and the little guys will be unable to allow user content without assuming massive risk. However, I just came across this Microsoft product page (https://azure.microsoft.com/en-us/services/cognitive-services/content-moderator/) suggesting that Microsoft is starting to offer a service that allows subscribers to rely on Microsoft to decide what content is troublesome. The good news is that this somewhat levels the playing field between the big guys and the little guys. The bad news is that, instead of having to deal with government entities that can be sued for violating the First Amendment, we now have to deal with private entities not bound by the First Amendment.
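
    For what it's worth, here is a minimal sketch of what "relying on Microsoft" looks like in practice, based on the text-screening REST endpoint Azure documented for Content Moderator around this time. This is my own illustration, not anything from the linked page, and the endpoint host and subscription key below are placeholders.

        # Minimal sketch: screening a piece of user text with Azure Content
        # Moderator's ProcessText/Screen endpoint (as documented circa 2019).
        # ENDPOINT and SUBSCRIPTION_KEY are placeholders, not real values.
        import requests

        ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com"
        SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"

        def screen_text(text: str) -> dict:
            """Ask the service to classify text and flag personal data."""
            response = requests.post(
                f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen",
                params={"classify": "True", "PII": "True"},
                headers={
                    "Content-Type": "text/plain",
                    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                },
                data=text.encode("utf-8"),
            )
            response.raise_for_status()
            # The JSON response includes Classification scores (including a
            # review-recommended flag) and any personal data the service found.
            return response.json()

        if __name__ == "__main__":
            result = screen_text("Sample user comment to be screened.")
            print(result.get("Classification"))

    The point for the policy debate is that the subscriber writes almost no moderation logic of its own; the judgment about what is troublesome is outsourced to whatever scores the service returns.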

    I don't know which of these bad alternatives is the lesser evil.

  • Apr 12th, 2019 @ 6:43am

    Re: Re: unjust law

    The concept of conspiracy as a crime is not so new. For example, here it is in a Massachusetts case in 1922 (Commonwealth v. Dyer): https://casetext.com/case/commonwealth-v-dyer-4. That case cites back to numerous earlier cases. I do not have a citation to offer, but I would wager one could find discussions of the crime of conspiracy in English common law dating back at least to the 1800s. The Dyer case quotes an earlier case confirming a point made by Graham J above: "It is not always essential that the acts ... should constitute a criminal offence, for which, without the element of conspiracy, one alone could be indicted ...."

  • Feb 23rd, 2019 @ 7:44am

    Another possibility

    The folks debating above seem to be considering these possibilities: (a) the work doesn't have an owner (Mr. Masnick's preference); (b) it is owned by the employer under some court-created doctrine analogous to work-for-hire; or (c) it is owned by the person who issued the "command" that started computer execution. Let me offer one more possibility. The Copyright Act allows for joint ownership: "A 'joint work' is a work prepared by two or more authors with the intention that their contributions be merged into inseparable or interdependent parts of a unitary whole." There is no guarantee that this argument would prevail, but it is perhaps worth arguing that the person(s) who programmed the AI and the person(s) who used it have both made contributions with the intention that their contributions be merged into a joint work. The economic consequences of this conclusion (that all joint authors are entitled to a share of the profits) would certainly be a mess to administer, but that's true for lots of other aspects of copyright law.

  • Oct 25th, 2018 @ 8:57am

    Re: New Work

    Gary's observation is 100% correct: copyright in the derivative work that is the translation belongs to the author of that derivative work. He is also correct that the original translation and the new "official release" derived from it are both infringing. I often use this scenario as a teaching example with clients and young lawyers because the consequence of the mutual infringement is that each side theoretically has the power to enjoin the other from distributing the infringing work. That consequence--mutually assured destruction--ought, in a rational world, to lead to a negotiated outcome, e.g., one in which the first translator gets something: credit, money, etc.

  • Sep 18th, 2018 @ 12:44pm

    Anonymous Coward says "two glaring errors in your analysis"

    The two "errors" alleged are (1) once you launch an IPO and become a publicly traded company, you are voluntarily accepting a whole new regulatory framework that you have to operate under, and (2) once your corporation gobbles up a big enough portion of the market share, you become a monopoly [that the] government has an obligation to break up [if they] stifle free enterprise (and free speech).

    As to #1, public companies are subject to an elaborate regulatory framework, but it has to do with stuff like accurate quarterly reporting of financial performance and various other SEC-promulgated regulations. Those regulations do have some rules about what speech is and is not allowed, but they mostly have to do with "quiet periods" when a public offering is pending. It would be quite a stretch to interpret those regulations as affecting content moderation outside of quiet periods.

    As to #2, now we are talking about antitrust law. Federal antitrust law does not make it illegal to become a monopoly, but it does make it illegal to use monopoly power in specific ways: to fix prices, exclude competitors, etc. First, it seems to me doubtful that the biggest players of interest (Facebook, Google, etc.) are really monopolies as that notion is conventionally defined. Second, even if they are monopolies, is the regulation of, or failure to regulate, third-party speech an illegal use of monopoly power? I am open to being enlightened, but I am doubtful that that is an established legal principle.