by Glyn Moody
Thu, Jan 5th 2012 12:04am
from the wheels-within-wheels dept
One of the central questions the Wikipedia community grapples with is: What exactly is Wikipedia trying to achieve? For example, does it aspire to be a total encyclopedia of everything? What is the appropriate level of detail?
As might be expected in a community made up of volunteers, feelings run high over these apparently dry questions of philosophy. Just as there are free software and open source factions that work together for a common cause, but eternally snipe at each other over details, so the Wikipedia community harbors two groups that agree to disagree on what is the proper scope for the project: the deletionists and the inclusionists. Here's what Wikipedia itself has to say on them:
"Deletionists" are proponents of selective coverage and removal of articles seen as unnecessary or highly substandard. Deletionist viewpoints are commonly motivated by a desire that Wikipedia be focused on and cover significant topics – along with the desire to place a firm cap upon proliferation of promotional use (seen as abuse of the website), trivia, and articles which are of no general interest, lack suitable source material for high quality coverage, or are too short or otherwise unacceptably poor in quality.
"Inclusionists" are proponents of broad retention, including retention of "harmless" articles and articles otherwise deemed substandard to allow for future improvement. Inclusionist viewpoints are commonly motivated by a desire to keep Wikipedia broad in coverage with a much lower entry barrier for topics covered – along with the belief that it is impossible to tell what knowledge might be "useful" or productive, that content often starts poor and is improved if time is allowed, that there is effectively no incremental cost of coverage, that arbitrary lines in the sand are unhelpful and may prove divisive, and that goodwill requires avoiding arbitrary deletion of others' work. Some extend this to include allowing a wider range of sources such as notable blogs and other websites.
One particular area where the limits of inclusionism and deletionism are tested is local information. Should Wikipedia strive to provide the same level of detail about local information as it does about global facts? If so, how?
Maybe Wikipedia has found a way to do so without overloading the main encyclopedia: create a mini-Wikipedia devoted entirely to one location – in this case Monmouthpedia, about the Welsh town of Monmouth:
Monmouthpedia will be the first Wikipedia project to cover a whole town, creating articles on interesting and notable places, people, artifacts, flora, fauna and other things in Monmouth in as many languages as possible including Welsh.
There are a number of interesting facets to this project. The first is the direct involvement of local people. By limiting the range of the entries to one location it might prove easier to motivate new contributors – a perennial concern for the larger Wikipedia – and allow them to capture key aspects of a place they know well.
We are very keen for local people to be involved in whatever way they would like. Computer skills are not that important; it's the interest and the willingness to be involved, suggesting and writing articles, taking and donating photos, and recommending good reference materials. If you speak another language, it would be a great place to practice your writing skills and learn new vocabulary and grammar. There are a lot of opportunities for community involvement, including teaching and learning of IT skills, local history, natural history, languages, and people of different ages working together.
The amount, detail and quality of the information we could create is amazing. The Council for British Archaeology has designated Monmouth as the 7th best town in Britain. Knowledge gives us context; it allows us to appreciate our surroundings more. Monmouth may be the first place in the world to offer its tourist information in up to 270 languages.
Monmouthpedia will use QRpedia codes, a type of bar code a smartphone can read through its camera that takes you to a Wikipedia article in your language. QR codes are extremely useful; physical signs have no way of displaying the same amount of information, let alone in a potentially huge number of languages.
Articles will have coordinates (geotags) to allow a virtual tour of the town using the Wikipedia layer on Google Street View and Google Maps, and will be available in augmented reality software including Layar.
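The language-matching step a QRpedia-style service performs can be sketched roughly as follows. This is a minimal illustration, not QRpedia's actual code: the article table and function names are hypothetical, and it assumes the phone's browser sends a standard HTTP Accept-Language header when the scanned URL is opened.

```python
# Hypothetical mapping of a language-neutral article slug to the
# language editions in which it exists (illustrative data only).
AVAILABLE = {
    "Monnow_Bridge": {"en", "cy", "fr", "de"},
}

def pick_edition(slug, accept_language, default="en"):
    """Return the Wikipedia URL best matching the phone's language.

    accept_language is the raw HTTP Accept-Language header sent by
    the phone's browser, e.g. "cy-GB,cy;q=0.9,en;q=0.8".
    """
    editions = AVAILABLE.get(slug, {default})
    # Walk the header's entries in order; take the first language
    # (ignoring region tags and q-values) with an available edition.
    for part in accept_language.split(","):
        lang = part.split(";")[0].strip().split("-")[0].lower()
        if lang in editions:
            return f"https://{lang}.wikipedia.org/wiki/{slug}"
    return f"https://{default}.wikipedia.org/wiki/{slug}"
```

So a Welsh-configured phone scanning the same physical sign lands on the Welsh article, while a French one lands on the French article, without any change to the signage itself.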
The use of QR codes in physical signage around the town will add a new directionality to the links between Monmouthpedia and the town it describes. Similarly, the geotags in the articles will allow text and images linked to geographical locations to be loaded automatically as people walk around with suitable apps on their smartphones. Obviously, once in place, that localized QR-coded infrastructure could also be exploited by other, quite different smartphone programs, to produce fascinating geo-informational mashups.
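The geotag-driven loading described above amounts to a proximity query. A minimal sketch of what such an app might do, assuming a local table of geotagged entries (the titles and coordinates below are illustrative, not real article data):

```python
import math

# Hypothetical geotagged Monmouthpedia-style entries:
# (title, latitude, longitude) -- illustrative values only.
ARTICLES = [
    ("Monnow Bridge", 51.8094, -2.7190),
    ("Shire Hall, Monmouth", 51.8126, -2.7150),
    ("Monmouth Castle", 51.8115, -2.7160),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_articles(lat, lon, radius_m=300):
    """Return (title, distance_m) pairs within radius_m, nearest first."""
    hits = [(t, haversine_m(lat, lon, alat, alon))
            for t, alat, alon in ARTICLES]
    return sorted(((t, d) for t, d in hits if d <= radius_m),
                  key=lambda td: td[1])
```

As the user walks, the app would re-run the query with fresh GPS coordinates and surface whichever entries fall inside the radius, which is exactly the behaviour that makes the geotagged article set reusable by other mashups.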
But perhaps the most interesting aspect of Monmouthpedia is that it creates a kind of fractal Wikipedia. That's important because if it functions well, it sets a precedent for a new, nested kind of Wikipedia whose entries can sometimes drop down a level to an entirely new Wikipedia-like resource about a specific topic. Maybe the ultimate test of Monmouthpedia's success will be when people start creating wikis about those same "places, people, artifacts, flora, fauna and other things" that will soon fill its pages – an Inception-like Wikipedia within a Wikipedia within a Wikipedia.
Update: see this comment from a Wikipedian involved with the project for some clarifications.
by Mike Masnick
Wed, Dec 14th 2011 12:27pm
from the is-this-what-we-really-want? dept
I’ve been asked for a legal opinion. And I will tell you, in my view, the new version of SOPA remains a serious threat to freedom of expression on the Internet.
As I said, there's much more at the link, but this is pretty thorough and explains why SOPA, even in its changed form, is a huge threat and a bad idea -- especially if you believe in internet freedom.
- The new version continues to undermine the DMCA and federal jurisprudence that have promoted the Internet as well as cooperation between copyright holders and service providers. In doing so, SOPA creates a regime where the first step is federal litigation to block an entire site wholesale: it is a far cry from a less costly legal notice under the DMCA protocol to selectively take down specified infringing material. The crime is the link, not the copyright violation. The cost is litigation, not a simple notice.
- The expenses of such litigation could well force non-profit or low-budget sites, such as those in our free knowledge movement, to simply give up on contesting orders to remove their links. (Secs. 102(c)(3); 103(c)(2)) The international sites under attack may not have the resources to challenge extra-territorial judicial proceedings in the United States, even if the charges are false.
- Although the new bill renders these remedies discretionary (Secs. 102(c)(2)(A-E); 103(c)(2)(A-B)), it would still allow for serious security risks to our communications and national infrastructure. The bill no longer mandates DNS blocking but still allows it as an option. As Sherwin Siy, deputy legal director of Public Knowledge, explained: “The amendment continues to encourage DNS blocking and filtering, which should be concerning for Internet security experts . . . .”
by Mike Masnick
Wed, May 14th 2008 10:26am
from the is-streisand-a-mormon? dept
That situation got so much publicity, you would think that anyone would think twice about going down the same path. No such luck. Last month, Scientology threatened Wikileaks for hosting Scientology documents, and this morning (as a whole bunch of folks have sent in) news is coming out that the Mormon Church is threatening Wikileaks as well, for hosting church documents. In this case, the Mormon Church isn't just going after Wikileaks, but has also threatened the Wikimedia Foundation and document hosting site Scribd. It went after Wikimedia because Wikinews ran an article about the documents and linked to them (which is hardly copyright infringement). Scribd was apparently hosting a copy of the documents as well (since taken down). Wikileaks, however, true to its charter, is refusing to take down the documents.
While you can understand why the Church might not like its documents being made public, it does seem ridiculous that whoever decided to start threatening everyone didn't do even the most basic research to recognize what would happen as soon as the threats went out. Given what happened with Julius Baer, it should have been abundantly clear that threatening Wikileaks would almost guarantee that the documents would be both more widely seen than before and copied widely across the internet.