The Jimmy Wales Strategy: Just Add Wiki (This Time To Search Engines)

from the one-way-to-do-things dept

It seems that the talk of the tech world over the weekend came from a Times Online article about how Wikipedia founder Jimmy Wales was preparing a wiki-based search engine, called Wikiasari. The project aims to create a better search engine using the open source search projects Nutch and Lucene, which have been around for a while, though not much has been done with either. The Times Online piece speculated about Amazon's involvement in the project, which would have been interesting given the company's flopped attempt at a search engine with A9. However, Wales later denied that Amazon had anything to do with the project. Update: Folks from Wikimedia have chimed in to clarify that this project is not actually called Wikiasari, and to make it clear that it is a Wikia project, not a Wikimedia project.
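
For those who haven't looked at the underlying tools, here's a very rough sketch of what Lucene provides on its own: you index documents, then run keyword queries against that index (Nutch adds the web crawling on top). To be clear, this is just an illustration against a recent Lucene API -- the class and field names here are our own, not anything announced for Wales's project.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class TinySearchDemo {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory index = new ByteBuffersDirectory(); // in-memory index, just for the demo

        // Index a couple of "crawled" pages (hard-coded here; a crawler like Nutch would supply these).
        try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer))) {
            writer.addDocument(page("Open Directory Project", "a human-edited directory of the web"));
            writer.addDocument(page("Wikipedia", "the free encyclopedia that anyone can edit"));
        }

        // Query the index the way a search front end would.
        try (DirectoryReader reader = DirectoryReader.open(index)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("body", analyzer).parse("edit encyclopedia");
            TopDocs results = searcher.search(query, 10);
            for (ScoreDoc hit : results.scoreDocs) {
                Document doc = searcher.doc(hit.doc);
                System.out.println(doc.get("title") + "  (score " + hit.score + ")");
            }
        }
    }

    private static Document page(String title, String body) {
        Document doc = new Document();
        doc.add(new TextField("title", title, Field.Store.YES));
        doc.add(new TextField("body", body, Field.Store.YES));
        return doc;
    }
}

The wiki part of the idea would presumably live above this layer, with people curating what gets indexed and how results are ranked.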

Either way, it's got plenty of people thinking about whether or not this kind of "user-generated" search engine has a chance to reshape the space. It's an intriguing idea for a variety of reasons -- but one that faces some tremendous hurdles. One of the problems in the search engine world (mainly Google's) these days is search engine spam. While there certainly are some legitimate "search engine optimization" efforts out there, there are an awful lot of questionable sites that play all sorts of tricks to get listed higher, knowing that the almighty Google drives traffic and traffic is the key to making money online. At first pass, you might think a wiki-based solution would make that problem worse -- since it makes it even easier to spam (or vandalize, as the Wikipedians would say). However, there is one major difference: every change is recorded and can be watched. In other words, each attempt to spam such a system would be a lot more transparent, noticeable, and correctable. Of course, as with Wikipedia, some of the mistakes will get through, and that may cause many to automatically assume (incorrectly) that the whole concept is busted. Still, there's a big difference between having a system that can create user-generated results and one that actually does it better than Google or Yahoo. Getting that far is a really big challenge. It may also be worth noting that comparing this to Google is perhaps the wrong way to go about things. It may be more accurate to see it as a much, much, much more advanced version of the old Open Directory Project -- which is why it's interesting to hear that Jimmy Wales is also interested in getting his hands on what's left of the AOL-neglected Open Directory Project.
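
To make that transparency point a bit more concrete, here's a rough, entirely hypothetical sketch (the Edit record and field names are ours, not anything from Wales's project) of how a wiki-style result set could surface suspicious changes for human review: because every edit is recorded with an author and a timestamp, even a simple filter can flag changes that introduce new external links so volunteers can inspect or revert them.

import java.time.Instant;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical record of a single change to a search-result entry.
record Edit(String entry, String author, Instant when, String oldText, String newText) {}

public class RecentChangesWatch {
    private static final Pattern LINK = Pattern.compile("https?://\\S+");

    // Flag edits that introduce external links that were not there before --
    // a crude stand-in for the kind of patrolling Wikipedia editors already do.
    static List<Edit> suspicious(List<Edit> recentChanges) {
        return recentChanges.stream()
                .filter(e -> LINK.matcher(e.newText()).find() && !LINK.matcher(e.oldText()).find())
                .toList();
    }

    public static void main(String[] args) {
        List<Edit> changes = List.of(
                new Edit("best pizza nyc", "alice", Instant.now(),
                        "Local reviews and maps.", "Local reviews and maps."),
                new Edit("best pizza nyc", "spammer42", Instant.now(),
                        "Local reviews and maps.", "Buy cheap pills at http://example.com/spam"));
        suspicious(changes).forEach(e ->
                System.out.println("Review edit by " + e.author() + " on '" + e.entry() + "'"));
    }
}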


Reader Comments

  •  
    Nick Burns, Dec 26th, 2006 @ 7:37am

    why wiki, why?

    Google operates on a wonderful principle: don't tamper manually with the search results. Every day, though, they make their results better by improving the Google search algorithms. A wiki-based search will only be more costly to maintain and much slower to update on a web where pages change constantly.

    First

     


  •  
    PhysicsGuy, Dec 26th, 2006 @ 7:53am

    Also, automated computer processes are handy for doing things like organizing billions of web pages... I think this is a good idea. However, I don't think it will ever replace search engines like Google and Yahoo...

     


  •  
    MissingFrame, Dec 26th, 2006 @ 8:15am

    If you haven't found how broken Google is ...

    ... you haven't really used it. I do a lot of searches that require an iterative process and checking over 100 pages to find the right one. That's fine for me, since I was using search engines before Google even existed, but not everyone is that patient.

    Is Wiki the answer? For page-ranking alone, I doubt it. But if you are looking for that restaurant down the street that has an unlinked web page, it might be a partial answer.

     


  •  
    Gregory Kohs (profile), Dec 26th, 2006 @ 8:26am

    Isn't this DMOZ redux?

    Didn't they already try this with DMOZ? I guess if you just wait 12 years, you can repackage anything for venture capital investors.

     


    •  
      Daniel (profile), Jan 3rd, 2007 @ 3:38am

      Re: Isn't this DMOZ redux?

      DMOZ isn't a wiki. You can't add a site to DMOZ directly; you have to apply to have it added. You CAN apply to be an editor, but you can get rejected. With a wiki, if I want a site added, I just add it and hope someone else doesn't deface it.

       


  •  
    Monarch, Dec 26th, 2006 @ 9:41am

    This would be great if you could separate sites into informational sites, link sites, and vendor/merchant sites.
    Why?
    Because I am so sick of doing a search for a topic and then having to weed through the 90% of results that are crappy merchandise/store sites and link-redirect sites before finding the 10% that actually have the content I'm looking for.

     


  •  
    aReader, Dec 26th, 2006 @ 1:07pm

    or maybe...

    Wikipedia is a potential target for Google's acquisition spree.

     


