A radically different approach to tackling AI bots is to move collections behind a login.
More generally, this would be a terrible move for the open Web, which has at its heart the frictionless access to knowledge.
An article in MIT Technology Review by Shayne Longpre warns that publishers may respond to this challenge in another way, by blocking all crawlers unless they are licensed. That may solve the problem for those sites, and allow deep-pocketed AI companies to train their systems on the licensed material, but many others will lose out.
What well-known search engine crawls the web? That's right: Google, along with a few other big tech companies. The EU, Australia (News Media Bargaining Code), and Canada (Online News Act) were desperate for money. Ads pay less, few users subscribe, and search engines aren't necessarily doing news sites a favor (they're stuck between being indexed-and-AI-trained, or not appearing in Google's search index at all).
Threats to make geo-blocking legally enforceable go back to 2015
This: https://en.wikipedia.org/wiki/Geo-blocking#New_Zealand "This content is not available in your country" is annoying. A law assisting it by making circumvention outright illegal, even when you didn't actually pirate the content for free, is even worse.
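For context on the mechanism being discussed: server-side geo-blocking usually works roughly like the sketch below, where the server maps the client's IP address to a country and refuses to serve licensed content elsewhere. The IP addresses, countries, and lookup table here are all invented stand-ins for a real GeoIP database.

```python
# Hypothetical sketch of server-side geo-blocking. A real server would use a
# GeoIP database; this stand-in table and these addresses are invented.
ALLOWED_COUNTRIES = {"US", "CA"}  # where the content is licensed

GEOIP_TABLE = {
    "203.0.113.7": "NZ",
    "198.51.100.2": "US",
}

def serve(client_ip: str) -> tuple[int, str]:
    """Return an (HTTP status, body) pair for a content request."""
    country = GEOIP_TABLE.get(client_ip, "unknown")
    if country not in ALLOWED_COUNTRIES:
        # Many sites send 403; RFC 7725 defines 451 for legal restrictions.
        return 451, "This content is not available in your country"
    return 200, "<video content>"

print(serve("203.0.113.7"))   # blocked: the table resolves this IP to NZ
print(serve("198.51.100.2"))  # allowed: resolves to US
```

Note that the whole check lives on the server and keys off the client's apparent IP, which is exactly why a VPN endpoint in an allowed country defeats it.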
Sounds comparable to demanding browsers to block sites
This. Once again, people blame tool makers when their tools are abused, even though the tools are designed for broad purposes that are not necessarily illegal. Someone makes a screw with a pointed end? Sorry, that can be interpreted either as an alternative to a bullet or as a stabbing tool.
It would be very dumb if the WBM ever allowed AI crawlers; it's a lose-lose situation.
Backdoor access for AI scraping is one thing, but bandwidth is another problem; even Wikipedia could not tolerate free AI crawling.

Not to mention, excluding old pages that have already been saved (not to be confused with blocking the saving of newer articles) can also be detrimental to the news site that requested it. When you nuke web pages you previously allowed to be archived, you cut off what Wikipedia relies on for sources (a Wikipedia citation now points to a deleted article that is no longer accessible on the WBM). And then even more of the links to your pages come from Google, which is already hurting your traffic with its AI Overviews.

Either way, this puts news sites in an awful lose-lose position:

* Rely on Google for traffic? Hope that look-alike results don't bury you.
* Rely on ads? Either use ads controlled by Google, use less-intrusive ads with little payout, or resort to some of the nastiest ad inventories, which are intrusive, disruptive, and a step closer to the type of ads you find on adult sites, "sail the high seas" sites, file hosts, and link shorteners. You risk users turning to a certain browser extension, and hope you don't fall into a vicious cycle of adblockers vs. ads.
* Paywalls? Most people won't pay.
* And if AI-generated content becomes so rampant that we cannot tell whether content is genuine, people may simply stop trusting any news, further killing traffic.

It's terrible if news sites die and take their archived articles down with them.
This isn't new
This video reported on the mishaps of tracer.ai, a company that claims to be a brand protector. It has taken down even content that merely says "Thanks Minecraft", and a Minecraft-inspired 2D sidescrolling game with original graphics.
Once again, similarities with the youtube-dl situation.
GitHub went all out against takedowns of tools for the web, and was thorough about how online tools are threatened.

These claims try to lower the bar for what counts as an effective technological protection measure: one via the mere absence of a download button (or any on-page element), the other via a website's robots.txt merely saying which pages are off-limits. youtube-dl doesn't even use AI at all.

That is the equivalent of a website owner suing users for 1201 infringement for using an adblocker when the site doesn't even have any anti-adblocker measures telling them to turn it off, such as modal windows (boxes you cannot click out of) or full-page replacements.

The same goes for Louis Rossmann's point about making your browser ignore JavaScript code. JavaScript code itself doesn't do anything; it's the user's agent/browser that does things, and the user may dictate what it does, much like a scraper may follow or ignore robots.txt. Another example that lets users override JavaScript is alert(message): in Firefox, if a website spams alert boxes and the user repeatedly closes them, an option "Prevent this page from creating additional dialogs" eventually appears; if the user checks it and presses "OK", the browser ignores further alerts. That exists to prevent bugs or malicious scripts (especially tech support scam pages) from locking up the browser. Website owners should not have a legal right to forbid this, of all things.

We have moved from the News Media Alliance going to war against paywall-bypassing tools (it took down 12ft and "Bypass Paywalls Clean") to an era where 1201 is invoked over something that isn't even behind a paywall. I'll be getting the popcorn the moment I learn that tech support scammers are abusing 1201 against scam baiters and cybersecurity companies for bypassing the browser-locker mechanisms on their scam pages.
I don't believe that JavaScript code, or websites whose pages you can load in your browser without paying or logging in, should be treated like proprietary software guarded by Section 1201. That shit should be reserved for members-only content and stuff behind EME. I prefer the open web to be kept separate from actually 1201-backed content.
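To make the robots.txt point concrete: the file is plain advisory text that a polite client chooses to consult; nothing in HTTP enforces it. A minimal sketch using Python's standard urllib.robotparser (the rules and URLs below are made up):

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: just advisory text saying which paths are off-limits.
rules = """User-agent: *
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved client asks before fetching...
print(parser.can_fetch("MyBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("MyBot", "https://example.com/index.html"))    # True

# ...but a client that never calls can_fetch() simply fetches whatever it
# wants. Compliance is voluntary, which is why robots.txt alone is a weak
# basis for calling something an "effective technological protection measure".
```

The whole "protection" is the client's own decision to ask; the server never learns whether the question was asked at all.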
The EU government doesn't know how the internet works.
If you are an artist or any content creator using a website based in the EU, or a worldwide site like YouTube with servers subject to EU law, you'll experience:
- Having to give out your personal info. It's bad enough that we have crappy age verification mandates to view content; now we have UGC verification mandates. And don't think it will stay exclusive to the EU: trash laws tend to spread like cancer across the globe.
- After your content is submitted, you either wait many years, or have AI moderate it much sooner (and AI moderation is prone to errors), before it becomes public for anyone to see. Your content is illegal by default, like being judged guilty until proven innocent.
- Accepting that you may not even appear on search engines, or on social media sites if they have link previews. Quoting a post or a video is also banned.
These laws/bills might as well directly ban UGC. These people assume that other people should be the child's parents.
I don't agree that the internet should be a friggen daycare for the young. They're just advocating that everyone online should be the child's parent.
Imagine if Jim Sterling got a video taken down for talking about a bad asset flipper's behavior of taking down his review video
Yeah, he made a video about The Slaughtering Grounds. The dev took that down. Jim Sterling then made another video about the dev's behavior of banning Steam users who criticized the game for being poorly made. Now imagine the dev sends a second DMCA against the video talking about that.
Sigh, another 1201 attack against actions on the open web.
Similar to the RIAA's attack on youtube-dl: both involve trying to take down a tool or service on the web over alleged "DRM" on a service they do not own (YouTube in the video-download case, Google's search results here).

The difference: one argues that "if a website does not provide a feature, then that user action is prohibited", which threatens numerous browser extensions that extend a webpage's features. Reddit's attempt is to use 1201 to create the assumption that "if a site has anti-bot checks, then it is illegal to use VPNs, alternative frontends, or other competing services to consume our content outside our service". They don't just restrict authorized scrapers; they go after third-party scrapers that scrape from the authorized scrapers as well. That's why they blocked the IA and demanded it forbid AI scrapers from gathering data from the WBM. And when such a service did the same to Google Search, they had the audacity to directly sue that third-party company for gathering data from Google's search results.

This is the same site that got into a controversy over paywalling third-party apps' use of its API (the 2023 r/place was rightfully full of "f*ck spez" messages), in a way reminiscent of news sites demanding a link tax just to appear on a search engine, AI or not (this actually happened before Google implemented AI Overviews: they demanded to be paid for snippets and links).
Crash the software, crash the vehicle
Just imagine if the buggy update was mandatory in order to use the vehicle.
Adblock Plus: Who are you? youtube-dl: I'm you, but I download (public) videos instead
What does this pair have in common? They are tools that help users interact with and experience online platforms in ways the platforms don't intend, and both are threatened with lawsuits just for that. They let users experience online content however they want.

In fact, attacks on online user freedom and the right to develop browser tools occurred long before Axel Springer's first strike against Adblock Plus. In 2009, the (not-so-great) ad-supported file host MediaFire attempted (and failed) to take down a browser extension whose only function was to automatically click the download button, claiming it violated their ToS by "stealing bandwidth" and breaching their "acceptable use policy". They believed their ToS applied off-site and that tools against their ToS don't belong on the internet. They were worried users could AFK-download files without seeing ads. This is the same site that resorted to force-opening random third-party sites, a big no-no in online advertising (in the past, I've had tech support scam sites opened on me while downloading files there, something I never have to endure to read a news article, watch a YouTube video, or view art on social media).

GitHub even discussed how an anti-circumvention ruling against youtube-dl could threaten adblockers for simply blocking ads, just because a webpage doesn't provide a UI element to do so (not to be confused with the domain-blocking that Admiral does, or Linkvertise against link-shortener skippers). The ruling in Germany implies you are bound to use only the features present in the page's HTML.

Yeah, if they decide that website owners have the right to dictate how you experience their especially god-awful, malware-laden website (which I might as well call a glorified "Sony rootkit, website edition"), good luck with your declining traffic.
This could all have been avoided if they had chosen not to escalate the war on adblocking, but instead to improve the user experience: make ads that aren't intrusive and disruptive, and make the site user-friendly overall. Fix the problem, don't fight it.
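This is also the crux of why an adblocker involves no "circumvention": the blocking decision happens entirely in the user's browser, before any request is made. A minimal sketch of the idea; the filter list and URLs are invented, and real adblockers (e.g. those using EasyList) have far richer filter syntax than plain domain matching.

```python
from urllib.parse import urlsplit

# Invented filter list; real adblock filter lists use much richer syntax.
BLOCKED_DOMAINS = {"ads.example.net", "tracker.example.org"}

def should_block(url: str) -> bool:
    """Decide, purely client-side, whether to fetch a resource at all."""
    host = urlsplit(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

page_resources = [
    "https://news.example.com/article.html",
    "https://ads.example.net/banner.js",
    "https://cdn.tracker.example.org/pixel.gif",
]

# The browser simply never requests the blocked resources; the ad server is
# not "circumvented", it is never even contacted for those URLs.
loaded = [u for u in page_resources if not should_block(u)]
print(loaded)  # only the article itself survives the filter
```

Nothing here touches the server or bypasses any protection measure: it is the user agent declining to make requests, which is exactly the user-side control the comment argues should stay legal.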
Now imagine if malware makers attacked security researchers, oh wait, NEWANG is a malware maker
Yes, I intentionally call them NE-WANG, after they put literal RANSOMWARE on public transit vehicles. Sony could've done this to Mark Russinovich for exposing their DRM rootkit: sue him for exposing their garbage, intrusive DRM.
So basically, the OSA made privacy-minded people hide like online predators.
You were right, Peter Kyle! You didn't just make criminals use VPNs to evade law enforcement; you also made ordinary people blend in with the very people you go after, like a zebra's motion dazzle. Everyone is on the privacy side now.
Anti-Sexual Exploitation organizations are cancer.
I have seen youtubers reporting on Steam and Japanese art being hit by this:

https://www.youtube.com/watch?v=QWp8bBd0k5c&t=63s&pp=0gcJCccJAYcqIYzv
https://www.reddit.com/r/visualnovels/comments/1caxsmc/japanese_adult_games_dev_take_on_credit_card/

In case you are wondering: yes, countries like Japan do not prohibit porn content on the web. But when artists from Japan want to take commissions, or otherwise get paid on Skeb, Pixiv, or other sites, they hit a roadblock: payment processors do not accept payments for adult content. Japanese users are furious about unnecessary global bans on adult content.

Collective Shout is basically the Australian equivalent of NCOSE in America. If anything, in countries that don't ban NSFW content, websites should require a login to view it, with an option for viewers to choose whether they want to see such content, without intrusive age verification; and parents themselves (not the government) should set up parental controls for their child, so the child doesn't view inappropriate content or wander into other dangerous places online.
Login wall's second problem
Regarding that quoted text: not only are you saying goodbye to convenient access for users (which could reduce your traffic, since users don't like login walls; it's why BugMeNot exists), it also costs you search indexing (if Google cannot find the sentence a user searched for because it's behind a login wall, your page won't appear in the results). That is the problem robots.txt handled before AI wreaked havoc on the web. News sites already struggle with Google forcing them to either allow AI training on their works or not appear in search results at all, on top of the risk of declining traffic.

Reminds me of Microsoft Tay
https://en.m.wikipedia.org/wiki/Tay_(chatbot) This time, it’s Musk’s turn.
At this point, we might as well watch a race to see who's worse
YouTube has already dominated the news with its war on adblocking, copied Hulu's pause ads on TVs and the mobile app, and introduced double non-skippable ads.
More reasons to despise sites steering users toward apps instead of browsers
I don't need an app for every website I visit; I already have tons of apps on my homescreen. Now websites want us to use glorified spyware.
Trump: Please support Elon Musk's Starlink.
Trump to other countries: Tariff threat if you don't like Starlink.
Trump to Africa: We help you if you help Musk's Starlink.
This also explains the link tax angle when it comes to charging for crawling.