I picked up on the scare tactic Gaudino used, that Big Ag and Big Pharma would come do the patenting if the small players don't. I agree with Masnick about the ridiculousness of patents in this area and how they kill innovation, but this threat seems like a real risk. It's the system perpetuating the system. If you don't do it, someone else will and then you will be hosed. That's another of the perversities of the patent system.
It seems the push these days is to transform corporations -- which used to have to work within a country's governmental apparatus -- into extra-governmental entities that, rather than adhering to a country's laws, dictate those laws to it.
I do not want to live in the sovereign state of Big Business!
The fundamental problem here is that politicians are not accountable to the voters. They are accountable to industry. That's why so much of the TPP protects legacy players in user-unfriendly ways.
There may be a short-term solution in the form of public outrage and such. If so, count me in. The long-term solution is to find a way to get politicians to be accountable to voters again. I think that means real campaign finance reform. But whatever, if you care about things like the TPP (or any other similar issue), what you should really care about first is fixing campaign finance. You can't get politicians to fix anything in favor of voters until they feel accountable to the voters.
Anti-circumvention is like saying it is illegal for me to break my own front door (say, if I locked my keys in the house). It may be illegal for me to break down someone else's door, but I can kick in my own door if I want. Same should go for my digital goods.
The value in advertising is not what a seller gets out of doing it, but what they lose by not doing it. If you have two equivalent products X and Y, and X advertises but Y does not, Y will lose. So if one advertises, both must.
That's only one layer in the logic of advertising, though. As others have pointed out, it is complex and illogical, and works in ways we may not even understand or be aware of.
There isn't "truth" in advertising because there isn't "truth" in purchasing. It's a two-sided coin (multi-headed hydra!).
I hate ads as much or more than anyone. I don't have cable because of ads; I don't watch broadcast TV because of ads; I block web ads entirely online; I don't use Apple products because they don't let me block ads my way [via the hosts file]; I have subscriptions to Netflix and SiriusXM (and avoid the Sirius/XM channels with ads); etc. Seriously, I hate ads.
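For the curious, hosts-file blocking works by resolving ad-serving domains to an unroutable address before the browser ever contacts them. A minimal sketch (the domain names here are placeholders, not a real blocklist):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
# Map ad/tracker domains to an address that goes nowhere:
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
```

Community-maintained blocklists in this format exist; the point is the mechanism works at the OS level, which is why a platform that locks down the hosts file takes the option away.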
That being said, I think we as consumers need to admit to our part in how things got this way. We are fickle. We choose products not based on "truth" but based on emotion, style, popularity, etc. You might believe there is "truth", but really there isn't.
Seriously, Coke is not objectively better than Pepsi. You have your preference, I have mine. Coke cannot create an ad saying, "we fulfill all the criteria of being a cola better than Pepsi". Proper competition results in products that are functionally identical. Once you have that, all you have left to differentiate yourself, to sell yourself, is intangibles where you have to "create" the value out of nothing. So even in an ideal world, all advertising can do is create the sense in you that you want product X over product Y, for no objective reason.
If we want ads that are less annoying then we need consumers that are less swayed by them. Do you buy generics? Why not? Do you buy the staid, solid Consumer Reports-rated car or the stylish one? Do you buy the $15 wine or the $50 wine? Do you objectively rate products on quality (on your own scale) and then buy the best or one with the best quality/value ratio?
People don't buy that way, so advertisers don't advertise that way. It's not all the advertisers' fault.
This still doesn't mean I have to look at ads, though! And I blame all y'all for this state of affairs. Not my fault, I buy objectively. ( ;) )
The moral argument against ad-blocking fails. It's no more immoral for me to block ads than it is for a website to choose to use ad networks that present the kinds of ads that chase me away. It's an optimization problem.
First, advertising currently works. Ad-supported websites survive. These statements are demonstrably true simply because ads and ad-supported websites exist. I feel no moral obligation to enable ads because this is true even though I block ads. So, my opting out of this obnoxious system hasn't broken anything, and thus the argument that I have some moral obligation to opt in fails. If advertisers and websites want me to put up with ads, they need to create ads I will put up with. They have not. Until they do, they don't get my eyes.
The equation for advertising is not an all-or-none equation. It's a statistical one. There will always be an "ad-viewing" and an "ad-blocking" group of users. Advertisers try to maximize revenue from the ad-viewing group. They have many ways to do this. If things they try chase ad-viewers into the ad-blocking group, but still raise revenue, then that's a rational thing to do. But it's not fair to blame the ad-blockers for "breaking" the ad-supported revenue models. The ad-supported sites play a counterbalancing role. If they think their ad-blocker group is too large, they can try to increase their revenue by seeking out ad networks that chase away fewer users.
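The statistical point above can be made concrete with a toy model. All the numbers below are made up purely for illustration: revenue comes only from the ad-viewing group, and a pushier ad format can shrink that group yet still be "rational" if total revenue rises.

```python
def expected_revenue(total_users, viewer_fraction, revenue_per_viewer):
    """Revenue comes only from the ad-viewing fraction of users."""
    return total_users * viewer_fraction * revenue_per_viewer

# Illustrative numbers, not real ad-industry data:
baseline = expected_revenue(100_000, 0.80, 0.010)    # mild ads, large viewing group
aggressive = expected_revenue(100_000, 0.60, 0.015)  # pushier ads chase 20% into blocking

# Pushier ads can still win on revenue even while growing the ad-blocking group:
print(baseline, aggressive)
```

Which is exactly why blaming individual ad-blockers misses the point: the site and the ad network choose the trade-off that sets the size of each group.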
If you want to lay the blame at the feet of anyone, it is not the individual user who feels their internet experience is degraded when they are forced to view ads they don't like. They have no control over the situation. Blame the advertisers and ad-supported sites. They are the ones making the choices that create the ad-blocking and ad-viewing groups. They need to start optimizing their approaches to increase the ad-viewing group. Whining and begging users to voluntarily join the ad-viewing group against their own self-interest is the weakest tactic the advertisers can use.
Kudos to Techdirt for respecting the ad-blocking group enough to make it easy for them to block ads. That's respect for your community.
Not at all. I learned how to learn and the internet puts at my fingertips loads of information from which to learn. Of course I'm searching for info on things I don't know. They're new. That's why I need to learn them.
At the same time, there are things I never plan to learn/memorize well, because I don't use them often enough, and plain searching (in the sense I think you meant it) is more than efficient enough.
In my programming job the internet has allowed me to focus very much more on problem solving than "fact hoarding". I do not need to commit to memory the details of every programming language or API I use. I do not need to memorize the implementation or details of every algorithm or mathematical theorem I need. I can operate at a higher level, solving problems. When I need details, I can find them much faster than I ever could in any reference book.
I'm so much more efficient, and so much more able to learn new skills, now than I was before the internet.
College taught me how to learn and the internet lets me learn at a rate that college could never support.
You definitely buried your lede. The argument for free speech is simple. It's your last paragraph.
Allowing anyone to decide what is and isn't acceptable speech based on its inherent "value" is not far away from criminalizing thought and those who disagree with the government (i.e. the ones with the weapons).
You're arguing that eventually we'll have computers that think better than we do. I actually want that to happen. It's not clear to me that it will, notwithstanding all the arguments about how it is "likely, if not inevitable". But it will or it won't happen independent of what I think. So, for me that question is moot.
Questions about what to do when that happens, and how to control such computers, if they need controlling, can be interesting and worthwhile. But my original comment was directed at the "AI will be evil" camp of people.
Take for the sake of argument that super-sentience will be achieved someday. What conclusions can you draw from that? Virtually none. The "AI will be evil" people say such a thing will be like people, only MORE. And then they pick whatever characteristic they want, amplify it and turn it into whatever scary scenario they want. It's just so much of a fairy tale that it is counterproductive.
But the thing that really irks me is that all these fairy tales are being taken as credible predictions that are leading people to spend real resources today trying to prevent fairy tales from coming true. It's a big waste driven by ignorance and fear.
If people start to think of AI as a weapon/technology too powerful to control, then they'll want to stifle work in this area for no reality-based reason. That would be the real tragedy here.
Lots of the coolest tech we have these days came out of AI research. (Speech recognition, robotics, automatic translations, economic algorithms, image classification, face recognition, search engines.) This "AI is evil" meme threatens to choke off the next wave of innovation.
To deny that this will happen you have to claim either:
Or, that, given our understanding of the first (above-threshold) learning computer, we will also understand how to limit its ability to run amok.
This is a variation on your third option. It's not that brains are fundamentally different. It's that our understanding of computers is fundamentally different. Computers are our creation. We understand them to a level far beyond the level at which we understand how the brain works. So, when we create something that we believe has, almost has, or can create for itself the ability to learn better than us, we can also build in limits before we turn it on.
We do that kind of thing all the time to protect against agents we don't trust: locks, passwords, encryption, guns, fences, walls.
The argument that leads to AI panic is the argument that their progress will be so fast that we won't keep up, so people imagine scenarios where the world of, basically, today is faced with a hyper-intelligence that, by fiat, is endowed with vastly better abilities than we have. It's just magical thinking.
You will not find panic in any of my statements or arguments.
No, but all these stories about AI taking over are AI panic, and they are the ones grabbing headlines. My frustration is that all these AI-takeover scenarios are so unrealistic as to be simply fairy tales, yet people take them seriously, like they're about to happen.
It's like people suddenly starting to worry that wolves will develop the power to blow our houses down, and then the media running with it, quoting "experts" who predict how soon this might happen. Still a fairy tale.
The basic requirements of AI are "computronium" (a computing substrate to run on) and energy. The first AIs will realize this, realize the nearest large energy source is the sun, and will abandon earth before destroying humanity. Whew! Saved by self-interest. But wait, computronium. First they'll harvest Mercury. Then Venus, and the rest of the planets. Eventually they'll go interstellar and harvest other planets. Then they'll discover how to make a star go supernova to produce a lot of computronium (because where else do heavy elements come from?).
So someone do an exponential calculation to see how long our galaxy has before it is consumed and the AI goes intergalactic.
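Happy to oblige with the back-of-envelope version. Every number here is a pure fairy-tale assumption to match the joke: roughly 10^11 planetary systems in the galaxy, and an AI that doubles its harvested-system count every century.

```python
import math

# Toy assumptions, in the spirit of the joke above:
planetary_systems_in_galaxy = 1e11  # rough star count for the Milky Way
doubling_time_years = 100           # assume harvested systems double every century

# Exponential growth: number of doublings needed to consume the whole galaxy.
doublings = math.log2(planetary_systems_in_galaxy)
years = doublings * doubling_time_years
print(round(years))  # a few thousand years, give or take
```

That's the punchline of exponentials: even a leisurely doubling time eats a galaxy in an eyeblink of cosmic time.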
I do admit a statistically significant lack of a sense of humor on this topic. But some jokes end like this: I'm only joking! (And then in a stage whisper: or am I?)
The first part is almost inevitable.
Yeah, not really. But that's the argument that makes all this hogwash work. The formula is this: there's been progress, and there's been an increasing rate of progress. Ergo, ASI. ASI, ergo panic. As if, in the story about Turry, and in the article it came from, humans are reduced to mere bystanders as AI zooms past in the fast lane.
Thinking about issues humanity will probably face in the future is counterproductive?
I don't like your strawman. Let's say: neglecting issues of real, concrete, immediate consequence in favor of wringing our hands over an unlikely future dystopia is counterproductive. That's the scenario we're in.
Or are you arguing computers will never be really qualitatively different than they are now?
Qualitatively is subjective. But yes, if pressed, I do argue that. To give context, though, I consider today's computer technology qualitatively the same as it has been since ... whenever. But it's easy to argue that today's technology is qualitatively different than that of the 50's, 60's, 70's, 80's, or even 90's.
Anyway, whether you want to draw the qualitative line at ANI, AGI, or ASI, doesn't really matter. What does matter is that as the capabilities of AI progress, we will not be idle bystanders. We will be creating the advances, observing the advances, and can react to the advances.
Our reactions, though, need to be based on what actually happens or is actually about to happen, not based on wild assumptions about what might happen if a bunch of magic happens.
You can argue as much as you want that the trends point to the magic happening, but that's not the same as actually knowing how to make the magic happen.