kinemaxx's Techdirt Profile

kinemaxx's Comments

  • Sep 04, 2025 @ 10:00pm

    Is this article written by an AI infiltration team?

    Wait, the only solution to sophisticated state actors infiltrating open source projects is to make sure that people with lots of money (ie: state actors) are able to finance and support those projects (possibly including actually building the teams working on it), with the added leverage that you now depend on them for financial support? I'd almost think the article was written by one of these AI infiltrators.

  • Apr 20, 2023 @ 05:43pm

    More than labels

Hopefully there's more to it than the labels themselves. The idea sounds like a moderation version of uBlock Origin. You can have the "for dummies" setup to start with, but the beauty is in how well you can refine the results, and share your own or use other people's moderation setups. I'd also expect it to be able to tag content to the level of the Danbooru sites — hundreds of potential tags, each of which can be contained within broader categories. For example, Naruto (the anime) is different from Naruto (the character), which is different from naruto (the ramen). Probably don't show all of them in the default view, but you should be able to refine your personal moderation filter based on something far more complex than "Hate Groups".

  • Jan 12, 2023 @ 07:03pm

    More background

You missed some of the background of why the OGL was originally created. In the 90's, TSR (the previous owner of D&D) was incredibly sue-happy, going after all kinds of third party publishers who were making D&D supplements. Nothing ever went to trial, but the lawsuits were designed to be as costly as possible to defend against, and I think bankrupted at least one company. After Wizards of the Coast bought the D&D IP, they needed to get those third parties back on board with trusting them and making D&D content for the reasons Dancey explains, and the OGL was a major part of that. It guaranteed that WotC would not sue them as long as they kept within certain bounds of what they published. Cory Doctorow makes a reasonable case that the OGL doesn't actually give anyone any real rights (that almost everything it allows is not actually copyrightable), but entirely misses the point that being legally correct and not being sued are two very different things. The OGL gave third party publishers the confidence to participate, which led to the explosion in content that expanded D&D's reach and worth to what it is today.

  • Nov 29, 2017 @ 12:36pm

    I'm not finding this opening article very convincing. It seems to be very waffle-y and hand-wavy about its points, to a degree that this site usually heavily criticizes other public speakers for. It feels like it didn't go through a proper critique and editing process.

The individual points (bold 1, 2, and 3) are arguments that I can see coming up, and are relevant to the premise of the article. However the explanations for them, and the excusing of them as not being "real" arguments, are so poorly supported and detailed that I'd expect this to be run on some click-bait news site, not TechDirt.

  • Nov 08, 2016 @ 12:21pm

Ehhhh.... There's a reason that such things are against the law. It might seem cute for the above case, but vote buying (ie: paying for votes, and then having proof that such a vote was made) is a type of fraud (or blackmail) we can guard against (sort of), and not guarding against fraud is pure negligence.

Cell phones everywhere have kind of broken the picture/evidence aspect, even if there are some stations that try to keep phones out of the voting booth. But anonymous and secure voting is one of the necessities of a working democratic system (and part of what makes it so damn difficult). I'm not even sure it's possible to solve the problem entirely anymore, but it should not be so casually dismissed.

    Ignoring that is definitely not something I'd expect from TechDirt.

  • Mar 10, 2016 @ 02:47pm

First, flagging them as anti-ad-blocker kinda sorta doesn't quite work. Are you going to update the post about a Wired article that you put up a year ago because they're doing this now? And if they drop the plan after a month, people reading the articles posted in the meantime will see a warning that scares them off but is no longer valid.

    I guess you might be able to use CSS tricks to style "known" sites, and just have a single rule in the CSS file that adds an ::after rule with that little message. As long as you keep that up-to-date, it would be valid for all instances of linking to that site, from any articles on this site.
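A single stylesheet rule along those lines might look like this sketch. The selector list, domains, and message wording here are all hypothetical examples for illustration, not anything Techdirt actually uses:

```css
/* Hypothetical sketch: flag links to sites known to block ad-block users.
   The domain list and message text are invented examples. */
a[href*="wired.com"]::after,
a[href*="forbes.com"]::after {
  content: " [Caution: may block ad-block users]";
  font-size: 0.8em;
  color: #a00;
}
```

The appeal is that the message lives in one place; only the selector list needs to be kept up to date, and every article linking to those sites picks up the change automatically.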

    The primary thing I'd want to be sure of is that you include sufficient portions of the original article that being blocked from the other site doesn't hinder understanding of the story. Luckily, you tend to do that anyway.

    As for the actual warning: Well, as you've noted, using or not using an ad blocker is not itself an indication of whether you'll be blocked, and I'll often give temporary permissions to a site, so that even if blocked at first, I can still choose to allow them on a case-by-case basis. So I'd put it more as a caution than a warning. (eg: "Note: the linked-to site is known to run code that may prevent you from viewing its contents if you use ad-blocking or security software.") Maybe even just put in a [Caution] marker that puts the full explanation in a tooltip.

  • Nov 24, 2015 @ 12:33pm

    'Mansplaining'

    To add to the WTF factor: The twitter feed has swerved into accusing the guy who pointed out the writer's gross mistakes of 'mansplaining', with comments such as:

    He sounds like a typical silicon valley misogynist scrambling to keep women like you out of his boys club.

    Apologise for trying to discredit Clare's opinions & for undermining her autonomy just because she's a woman.

    Personally I'd suggest he also learn to step back when women express their opinions.

all of it acting as a shield against criticism.

    As if gross journalistic negligence should not be criticized because the writer was a woman. As if being a woman had any relevance whatsoever to the objections being raised.

  • Sep 29, 2015 @ 01:09pm

    Re: Re: Re: Re: No, everything is OK!

    no idea why they need that much code for something that can be stuffed in an tag

    Should be, stuffed in an "audio" tag.

  • Sep 29, 2015 @ 01:06pm

    Re: Re: Re: No, everything is OK!

    G+ isn't mandatory: I've been on the 'net far longer than you and I don't have an account there.

    Allow me to clarify: Mandatory for one of Google's services that I needed at some point, when they were trying to tie Google+ into everything they had.


    As a continuation of the look at this page:

    Just did a count of the lines of javascript code on this page (article page, not the front page). There are at least 40 javascript code files or other chunks of HTML to be inserted into the page. 6 of those I couldn't count because the code inspector didn't want to un-compact them, but there was enough horizontal scroll to imply at least hundreds of lines of code. The remaining 34 add up to a total of 102,000 lines of code.

    So, a single article on this site uses well over 100,000 lines of code due to all the other stuff it links to. The entirety of the HTML for the page, including all of the comments (which are irrelevant to the 100k code line value) takes up just 2400 lines (word-wrapped at 110 columns).

    20k lines of that is for the Soundcloud audio player widget; no idea why they need that much code for something that can be stuffed in an tag. But that still leaves over 80k lines of code being pushed for a simple article plus comments. Oh, and the comments? 145 lines of code. In fact, all TechDirt javascript put together is barely more than 800 lines of code — 1% of all non-widget javascript on this page.

    So, yeah.. why do we need all this crap? The entirety of what I came here to see is just 3% of the page, in terms of code (never mind the ads themselves).

    (Note: Haven't tested with the "no ads" cookie set yet.)

  • Sep 29, 2015 @ 12:06pm

    Re: No, everything is OK!

    Eh, some of them are OK. Gravatar gives you the cute little avatar graphics. Twitter, Reddit, LinkedIn and Facebook are obviously for the social media crowd. And since they put their podcasts on Soundcloud, I can see why there would be links to that and its CDN.

    No clue what flattr is, but am guessing it's another social media thing.

    The ad side would be Amazon, scorecardresearch, quantserve, which is actually a fairly limited set out of that entire list.

Still, that is a ludicrous number of third party sites that each page has to connect to. The number of separate domains some of them use (eg: 4 domains for Soundcloud) is presumably sharding to avoid too many connections to the same domain, which is itself indicative of how much stuff they're trying to do (this might improve with HTTP/2).

    It always seems like there ought to be a better way. The social media in particular — what exactly are they doing that needs to be running a script on every page load? Is a little button with a link to Facebook or whatever not good enough?


    Aside: I would be just as happy if there were a way to completely disable the social media links, but I recognize that I'm an outlier in not having any social media accounts at all, other than the mandatory Google+. However all of them pile up to cause a significant hit in page load times.

  • Sep 15, 2015 @ 03:51pm

    Erfworld (http://www.erfworld.com/blog/view/47176/bad-ads-killed-new-ads-tried-and-augusts-armored-dwagon) just had to go through an ad cleanup, and their problem was that all the worst ads were also their highest-paying. They had to dump a bunch of revenue to get rid of ads that were actually crashing the site.

    But their experience also makes it obvious what sort of cycle is happening. Publishers are incentivized to push out the ads that pay the most, that the users hate the most, and are most likely to want to block. So users block more, and the publisher sees a revenue drop, so they look into what will pay more to make up the difference, which gives them more ads that users want to block even more...

    And of course there's the ongoing problem of malware in advertising networks. Publishers don't want to vet ads and serve them from their own domain (ie: take responsibility for what they're actually serving to their users). Ad networks want to track you all over the internet, and run auctions to maximize the value (for them) of every ad they serve. The only way for the user to win is to get out of the game entirely — which means ad blockers.

  • Sep 03, 2015 @ 02:20pm

    The two pictures on the FoodNavigator site would be very hard to confuse. The two pictures here on TechDirt, however, could be confused very easily (not as an identical product, but as a very closely related product that might be sold by the same company).

    Whether it's close enough to be infringing is obviously a decision for the courts, but it's close enough that I wouldn't immediately toss the case out.

  • May 14, 2015 @ 06:58pm

Despite the fact that it's still a scam, I'll grant that the exec's comment about "too many choices" being a bad thing is a reasonable point. Look at the current Windows 10 SKU argument. With just 7 SKUs, each with a reasonably different target use, there are a ton of complaints about "why not just have 1 version and be done with it?"

There are plenty of specific differences between the two situations that don't make it a perfect comparison, but people will certainly complain about "too much" choice just as easily as they will about too little.

    Consider the 'ideal' a la carte in cable: You have something like 200 channels, and when you sign up you need to select every individual one you want. How many people would be griping about going through all that hassle? Of not being aware that some obscure channel somewhere down the list is actually something they're interested in? That they're in a sort of "buy before you try" situation, where you don't actually know exactly what each channel provides, and which ones you really want.

    Even with the high-profile ones, out of a couple dozen ESPN channels, which ones actually carry the shows and sports that you, specifically, are interested in?

    So, yeah, the company execs may be sticking their head in the sand about some aspects, but their critics are also ignoring a lot of the practicalities of implementing that for the average user.

  • Apr 29, 2015 @ 10:21pm

    Re: Re: Re: Re:

There are a lot of points that can be said both for and against your statements, across a wide range of how such a system could be applied, and I can't legitimately argue that you're either right or wrong (it requires getting into much more esoteric discussions and research, and I'm not up for doing that in a comment thread like this). Mostly the feeling I get is that your proposals are tied heavily to idealism, without giving enough consideration to practical reality and human nature.

Basically, it needs a heavy dose of "How can I break or exploit this system?", as well as tackling a large number of user experience issues, more mathematical analysis, and lots of thought as to "Why should it work this way rather than that way, and what are the ramifications of each?" (regarding things such as copyrighting everyone's text messages, where edge cases matter)

    Regarding the specific question of "Why not go back to that?" (for opt-in), that ignores the question of "Why did they change it in the first place?" I don't have a transcript of those discussions, nor do I have experience in the use of the pre-1977 system, so all I can say is: Don't change the system if you don't know what the problems with the old system were. If you don't know what the problems were, you can't fix them, or properly compare the competing options.

  • Apr 29, 2015 @ 07:25pm

    Re: Re:

    Or to put it another way, it's probably not higher, and it's probably not lower? Sounds like a workable definition to me.

    You have that backwards (or maybe inside out). It probably is higher or lower, but there's no preference as to which direction to lean (ie: not more likely to be higher, or more likely to be lower, vs the other option).

    Also, as I said, Spiderman isn't really a good example, but then there aren't any examples right now because of the way the current copyright system works. Maybe the Harry Potter series would work better to illustrate.

    The point is, the shorter the duration, the easier it is to wait out the copyright rather than license. And, given that part of the point of the copyright monopoly is to have control over that licensing, the easier it is to avoid it entirely, the less value copyright has at all. Despite the fact that that's fundamentally part of how we want things to work, it's still an exploit because it lets you game the system.

    So you want it long enough that licensing is preferred if someone wants to ride the popularity of an existing work, but not so long that someone can't easily pick it up within a generation (~20 years).

On the issue of renewals, I'll grant that there are valid approaches on that side, numerically. However the maintenance overhead for both the producer and consumer seems like it could be far more trouble than a handwaved dismissal suggests. A searchable database like the patent office's, though... maybe.

    For a renewal system, I'd probably propose something like: All works are automatically copyrighted for 10 years. After that, a work can be given renewals of 5 year durations, for an initial fee of $1000, and tripling every additional renewal. The total cost of renewing a work out to 20 years would be $4000, and to 30 years would be $40,000, and would cost $81,000 for the next step. An alternate version would be a 15 year initial duration, $10,000 fee for the first extension, and doubling thereafter.

    There would be no upper limit, except the point where the copyright holder decides it's no longer worthwhile to continue to pay the renewal fee. Due to the exponential growth, that point will be hit eventually.
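The escalating-fee schedule above can be sketched numerically. This is just a minimal sketch of the comment's own proposal (10-year base term, 5-year renewals starting at $1000 and tripling each time); the function name and structure are invented for illustration:

```python
def renewal_cost(total_years, base_term=10, renewal_len=5,
                 first_fee=1000, multiplier=3):
    """Cumulative fees to hold a copyright for `total_years`
    under the proposed escalating-renewal scheme."""
    if total_years <= base_term:
        return 0
    # Number of 5-year renewals needed past the base term (ceiling division).
    renewals = -(-(total_years - base_term) // renewal_len)
    fee, total = first_fee, 0
    for _ in range(renewals):
        total += fee
        fee *= multiplier  # each renewal costs triple the previous one
    return total

print(renewal_cost(20))  # 1000 + 3000 = 4000
print(renewal_cost(30))  # + 9000 + 27000 = 40000
```

The exponential growth is what forces an eventual lapse: by 30 years the next renewal alone costs $81,000, which only a work still earning serious money can justify.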

    However, despite the fact that it would eventually force a work to the public domain, I don't really like it, as it's far easier for a large corp to tie works up even if they take a loss, so there would be a tendency for small-time creators to have fairly short durations, while big corps would have much longer durations. That would tend towards a preference for assigning the copyright to a larger corp that can more easily afford to stockpile longer-held works, shifting the balance of power away from the smaller creators.

    Thus a money-based limiter is probably not a good solution. Any limits should be built in directly.

    Thus I'm back to a preference for ~20 years as a flat duration, or up to 30 years if it includes an abandonment expiration.

Of course... a forced abandonment expiration system could be exploited too. If a company has a contract with an author that prevents them from publishing with someone else, and then just sits on the work without publishing it themselves, they could force a work to be abandoned via the 3-year clause I proposed.

    So perhaps the flat 15 year automatic, plus optional renewals of either a single 15 year block, or maybe three 5 year blocks, would give the best balance, while putting a hard cap on the duration that isn't based around money. That would give you a range between the 50th percentile and the 95th percentile, to be flexible enough to give a near-optimal result for almost all cases.

  • Apr 29, 2015 @ 04:53pm

    Re: Re:

    From the actual original paper:

    Table 1, by nature of its form, implicitly gives the inaccurate impression that each of the outcomes [nb: optimal copyright term length based on underlying math] listed is equally likely. We present the distribution function in Figure 3. As this shows, the mode [nb: highest probability for any single length to be the optimal result] of the distribution is just under 20 years.


    Then, describing the percentiles I was referring to:
From the underlying cumulative distribution function we can calculate percentiles. The 25th percentile is 11 years, the 50th (the median) at 15 years, the 75th at 21 years and the 95th percentile at 31 years, the 99th percentile at 38 years and the 99.9th percentile at 47 years.

    That is, the cumulative probability that the optimal copyright term length is N years or less. Thus, 25% chance that it is 11 years or less, 50% chance that it is 15 years or less, 75% chance that it is 21 years or less, etc.
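As a toy illustration of reading estimates off those percentile points: the (probability, years) pairs below are the paper's quoted values, but the linear interpolation between them is purely an assumption for the sketch (the paper's actual CDF is not piecewise linear):

```python
# Percentile points quoted from the paper: (cumulative probability, years).
cdf_points = [(0.25, 11), (0.50, 15), (0.75, 21),
              (0.95, 31), (0.99, 38), (0.999, 47)]

def years_at_percentile(p):
    """Interpolate the optimal-term estimate at cumulative probability p.
    Linear interpolation is a rough assumption, good enough to show
    how 'X% chance the optimum is N years or less' statements arise."""
    for (p0, y0), (p1, y1) in zip(cdf_points, cdf_points[1:]):
        if p0 <= p <= p1:
            return y0 + (y1 - y0) * (p - p0) / (p1 - p0)
    raise ValueError("p outside tabulated range")

print(years_at_percentile(0.50))  # 15.0: even odds the optimum is <= 15 years
print(years_at_percentile(0.95))  # 31.0: 95% chance it's <= 31 years
```

The point is that "15 years" and "31 years" answer different questions about the same distribution: the former is the median, the latter a high-confidence upper bound.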

    This leads to:
    This would suggest, that at least under the parameter ranges used here, one can be extremely confident that copyright term should be 50 years or less – and it is highly like[ly] that [the] optimal term should be under 30 years (95th percentile).


    And the concluding remark:
    Using the estimates for these variables derived from the available empirical data we obtained a point estimate for optimal copyright term of approximately 15 years (with a 99% confidence interval extending up to 38 years).


That is, 15 years is the estimated value, but the confidence interval stretches up significantly higher (38 years for the 99% confidence interval).

    The lower limit isn't given, and I don't feel like doing the math, but assuming an arbitrary value of 7 years as the lower 99% limit, you could present it as: a 99% probability that the optimal copyright length is between 7 years and 38 years, with the median estimate being 15 years.

    Mike's use of 15 years is not 'wrong', but it is misleading, because it ignores the confidence interval. And, in fact, it's almost certainly not the actual ideal value, simply because of statistical variance. The most likely value is in the 18-19 year range.

    As an aside: There are actually errors in the paper, where the written text does not match up with the charted graphs, so I'm somewhat skeptical of the 15 year value, in and of itself. For example:

    With our default discount rate of 6% and cultural decay of 5% this implies an optimal copyright term of around fifteen and a half years.

    However if you look at the referenced chart, the calculated value for 5% decay/6% discount is 18.5 years, not 15.5 years.

    In addition, Figure 3, being a graph of probabilities, looks like its midway point should be around 22-24 years or so, not 15.

    Given that there are reasonable arguments to be made in favor of a longer term (that is, no 'ideal' value will survive contact with humans intent on exploiting it), and that there are some questionable elements in the paper (I should really see about running the numbers myself), it makes sense to tilt a legislated value towards the conservative end of the confidence interval. I could see anything between 20 and 30 years as being reasonable.

  • Apr 29, 2015 @ 12:28pm

    Looking at Pollock's analysis, 15 years isn't really the 'ideal' length; it's the median (50th percentile confidence level) value implying that the ideal is X years or less. The 75th percentile is at 21 years, 95th at 31 years, 99th at 38 years, and 99.9th at 47 years.

    From that, 20, 30, 40 and 50 years are all also valid, based on how much certainty you want about the overall value.

    I would be against any term renewals for copyright. It's too annoying to know if a work which is past the first term length (which you can determine from the copyright date on the work itself) was actually renewed. I'd prefer a fixed length, non-renewable form.

    The main problem with shorter lengths is abusive exploits by other industries (ie: movies vulturing books). For example, the first Spiderman movie was 40 years after the original comic publishing date. Maybe not the best example, but you can certainly count on such exploits being attempted.

    From all of the above, I would go for 30 years (95th percentile), but with an additional factor — loss of copyright.

    Loss of copyright would be an explicit removal of copyright, reverting it to the public domain. I would designate this as being 3 years after it is considered 'abandoned'. The creator can also explicitly give up his/her copyright at any time.

    What constitutes being 'abandoned' depends on the type of work.

    For commercial works, like books or DVDs, that would be when they're considered "out of print" — no longer published or made available for sale from the copyright holder.

    For software, it would also include when the software is no longer supported (ie: no more updates or bug fixes). For example, Windows XP's support officially ended April 8, 2014, so it would lose its copyrighted status on April 8, 2017.

For non-commercial works, such as webcomics, fan movies, freeware, etc, it's a bit more difficult to decide what constitutes making something available. Is a freeware program on SourceForge that hasn't been touched in 15 years still valid? You might actually have a similar contention with commercial works. Is something that's only sold on VHS or laserdisc still 'available'? I would probably add the condition that the product still be 'consumable' on a standard, currently available device (ie: runs on a current OS, can play back on a system that you can buy at Walmart, etc).

    So, basically, anything that is considered "out of print" (which isn't the same as "hasn't sold any") for 3 or more years reverts to the public domain, since it clearly shows that the original copyright owner no longer finds value in maintaining the work themselves.

    Of course, it's fairly easy to use that as a loophole with digital works. Since there's basically no cost to 'make them available', they can bypass any risk of loss of copyright even if there's no real value in maintaining them. There are other nitpicky bits that would need fine-tuning, but overall I think this seems a reasonable approach.

  • Apr 28, 2015 @ 01:10am

    Re:

    Note that the above was just a bit of an idea that related to the test-taking stuff that was mentioned, and I wanted to write it out before I forgot it. On the Finnish educational system side, with more focus on personalization than scores, it's somewhat of a secondary consideration.

  • Apr 28, 2015 @ 01:00am

    Random idea that came to mind from a couple bits from the article and comments, along with another vaguely related idea on human motivations. Probably won't ever be applied, but it seemed interesting, and I might as well write it down somewhere.

    Right now we take a test and create a 'grade' — what percentage of the test did we successfully complete?

    So how about changing that to instead create a 'score' — not the score we usually see on a test, but a score like you'd get in a video game. A progressively advancing achievement number.

    Possible implementation: Every single test given over the course of the school year is exactly the same (with random variations for each individual question, of course). For example (simplified view), 10 questions on addition/subtraction; 10 questions on multiplication/division; 10 questions on fractions; 10 questions on decimals; 10 questions on variable substitution; etc, etc. Adjust to the actual subject as appropriate. Higher tier questions get more points per question.

    Each question is given a point rating, and the further into the test you get, the higher points per question. You accumulate points based on the number of questions you successfully answer, with the goal of getting the highest number of points by the end of the year.

    You then have a final class grade that's a composite of your best total score, and the rate of improvement over the year.

    For the improvement side, since the test is essentially the same throughout the year, you can then graph the results and see how the student is progressing over time, when they seem to get stuck (plateauing of the score), when they find stuff easy (spike in the score), etc.

The same simple questions are kept on each test, repeated over and over; students will answer them because, even if they don't give many points each, they're guaranteed easy points. And that constant refreshing of the basics helps solidify understanding to an instinctual level.

    The test is still limited by time, so it's not just a matter of "do you understand this specific topic, and are able to regurgitate the answer", but "do you understand this topic well enough to answer it quickly along with all the other stuff you've learned this year?"

    I'd also suggest making the tests with several dozen variants for each individual question, and have a program that can spit out a randomized version of each test for each student, to make sure it's not just a test of memorization (as well as avoid some types of cheating).

    Anyway, make it so that you're essentially trying to get a "high score" by the end of the year, and where the actual improvement over the course of the year is just as significant as the final score. You can then use the final score to determine whether someone is qualified to move on to the next course in the subject, and the improvement rate to collect students together with those of similar learning rates.

    Note that while technically this is doable with typical scantron methods, I personally find such testing methods cheap and lazy, and would avoid them if possible. It probably wouldn't work well with the randomized test questions, either.

  • Apr 06, 2015 @ 12:46pm

    Re: Re: My daughter is deaf.

    Why doesn't every movie theater have closed captioning?

Most likely the argument is that each individual, out of the hundreds of people in the theater, isn't capable of choosing whether to have said captioning on or off. Zonker's post implies both that AMC can provide closed captioning, and that it depends on the movie makers to provide said captioning. A quick Google search showed relevant devices to enable this.

    offtopic:

On the other hand, while it is a very niche need, it does seem like it would be cool if you could use the camera on your tablet or phone to take a picture of your movie ticket (or use the digital ticket, if bought online, etc) to get an encryption key that lets you watch a streamed version of the movie (broadcast simultaneously with the showing, within the theater area), with optional subtitles.

    Basically, a means of getting the subtitles with commonly available tech instead of specialized tech. Doesn't work for everything, but it seems interesting.

    /offtopic

    YouTube is big, should they have to close caption all their videos? Or require users to caption videos before they're made public?

    Same as every other answer: YouTube has to provide the technical capabilities (which they do), the 'producer' has to provide the subtitles. Since this is a business mandate, individual users are not required to provide subtitles for every video they upload, but I'd expect, say, ESPN's channel to be required to have captions on their videos the same as they'd need to have captions on their TV broadcasts.
