Let's Hook HAL Up To Wikipedia

from the skynet-is-active dept

Giving artificial intelligence systems a database of factual knowledge to pull from while “thinking” has always been a problem, with some researchers spending decades feeding information into systems in order to build functional knowledge repositories. A new project takes a shortcut, and uses Wikipedia as a foundation for an AI system’s world knowledge — in order to help the system “think smarter,” and make common-sense, broad-based connections between topics. While existing search systems and e-mail filters fake intelligence through statistical analysis of word frequencies, researchers in Israel are trying to build systems that can use Wikipedia’s vast pool of information to glean meaning and filter accordingly. We’ve discussed how cramming a system full of knowledge isn’t always the answer — what’s important is the ability to parse useful information out of the garbage. It will be the system’s ability to do the latter that will obviously determine success. Early applications include intelligent spam filters, but the researchers behind the patent-pending project hint they’re planning to market the system to the intelligence community as well. Parsing out meaning will be one thing — sorting through pages of inane Wikipedia edit wars may require a few more decades of AI development.
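For the curious, here’s a rough sketch of what that kind of concept-based comparison can look like. The word-to-Wikipedia-concept weights below are invented for illustration; a real system would derive them from the full Wikipedia corpus rather than a hand-typed table.

```python
# Toy sketch of concept-space text comparison: instead of matching raw word
# frequencies, each word maps to weighted (hypothetical) Wikipedia concepts,
# and two texts are compared by the overlap of their concept vectors.
from collections import defaultdict
from math import sqrt

# Hypothetical word -> {Wikipedia concept: weight} table, invented for this example.
WORD_CONCEPTS = {
    "mortgage": {"Loan": 0.9, "Interest rate": 0.6},
    "rate":     {"Interest rate": 0.7, "Ratio": 0.3},
    "viagra":   {"Pharmacology": 0.8, "Spam (electronic)": 0.9},
    "meeting":  {"Business": 0.7, "Schedule": 0.5},
    "agenda":   {"Business": 0.6, "Schedule": 0.6},
}

def concept_vector(text):
    """Sum the concept weights of every known word in the text."""
    vec = defaultdict(float)
    for word in text.lower().split():
        for concept, weight in WORD_CONCEPTS.get(word, {}).items():
            vec[concept] += weight
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

spam_profile = concept_vector("viagra mortgage rate")
message = concept_vector("quarterly meeting agenda")
print(cosine(message, spam_profile))  # ~0: no overlap in concept space, so probably not spam
```

The point is that two texts can look related (or unrelated) in concept space even when they share few literal words, which is exactly what a plain word-frequency filter misses.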



Comments on “Let's Hook HAL Up To Wikipedia”

44 Comments
Jo Mamma says:

Re: Re:

“Well, I for one welcome our new Wikipedia overlords”

LOL! Sheesh, this article and responses have enough uber-nerd references to be a /. article!

Anyway, I can’t wait for true AI. I want to ask a question and get a knowledgeable response, as though I were speaking to a human expert on the subject, à la ‘the butler’ algorithm in Snow Crash or Aristotle in Sunstorm (speaking of uber-nerd references).

Though I’ve never programmed anything AI-related, I’d think that the amount of “knowledge” is immaterial. We not only have Wikipedia for that, we’ve got the entire ‘net. The “neural net” (or whatever) and how it handles the processing and organization of the data is what matters.

JC says:

Re: Re:

Please. This makes perfect sense. Assuming the end goal is true intelligence, as in comparable to human intelligence, then AI needs to be able to distinguish the bogus from the truthful just as you and I do.

The only way to teach it is to let it find the truth for itself. A wiki is perfect for this job.

I think this is an innovative approach, and whether or not it works in the long run, it will likely produce a really great data set that puts us one step closer to the end goal.

I find it funny that those of you on this forum who fancy yourselves techies are so ignorant that you’d scoff at such an innovative approach to an extremely technical problem we have yet to solve.

Who are you fools to piss on it? If you’re all so damn smart, how come you haven’t come up with a solution to creating AI?

Seems to me most of you just like the sound of your own ignorance.

misanthropic humanist says:

Re: Re: Re:

“Assuming the end goal is true intelligence, as in comparable to human intelligence, then AI needs to be able to distinguish the bogus from the truthful just as you and I do.”

It is not, JC. There are many goals in AI, some of which are incompatible. One important goal of AI is to have *better than human* intelligence. Humans are notoriously poor at distinguishing truth from falsehood, for many good evolutionary and sociological reasons. If you wish to consider only absolute “truth” then study automatic theorem proving programs, which compute statements of predicate calculus. That was all looking peachy until Russell and Gödel came along and threw a spanner in the machine.
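(For a flavour of that kind of mechanical truth-checking, here is a toy sketch: a brute-force tautology checker for propositional formulas. It is far simpler than the predicate-calculus provers mentioned above, where validity is not even decidable in general.)

```python
# Brute-force propositional tautology checker: a formula is valid if it comes
# out true under every possible assignment of truth values to its variables.
from itertools import product

def is_tautology(formula, num_vars):
    """formula: a function of num_vars booleans; True if it holds under all assignments."""
    return all(formula(*values) for values in product([True, False], repeat=num_vars))

# Modus ponens, ((p -> q) and p) -> q, with "->" rewritten using not/or.
modus_ponens = lambda p, q: (not ((not p or q) and p)) or q
print(is_tautology(modus_ponens, 2))          # True: valid under all assignments
print(is_tautology(lambda p, q: p or q, 2))   # False: fails when both are False
```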

“I think this is an innovative approach”

Because you are unfamiliar with the history of AI research, particularly those projects at MIT and Stanford that already tried this on large textual data sets. It is possibly an improvement because it constrains the data set somewhat. A novel source of data input does not make an AI. At best it offers an interesting way to search the knowledge base within the Wiki.

“If you’re all so damn smart, how come you haven’t come up with a solution to creating AI?”

You fail to understand the complexity of this problem domain.

ANONYMOUS COWARDS says:

Re: Re: Re: Re:

“A novel source of data input does not make an AI. At best it offers an interesting way to search the knowledge base within the Wiki.”

I’d guess the most interesting asset here is really the potential interface between these two inert data sets, the text and the linkages listed. That interface will find out what users really find interesting by way of enticing web3.0alpha users into, well, using it. For some that means actually building it, so the construction process is very HI-driven, by a collective of teamworkers.

Even if we now say that little can come of it, we know little about which novelty will grab eyes and mindshare next. But to me the interface is really interesting.

Brad M says:

imagine the possibilities

I think this sounds great. Brains may not be going extinct, but imagine how useful a person would be if they knew 25% of the information on Wikipedia. Now just imagine the possibilities of real AI.

The second piece to all of this is getting people comfortable with the idea that a computer is more intelligent than any person.

I don’t think I’m ready to go see an AI robot doctor. When it comes down to it, I’ll stick with real intelligence.

Reed says:

Re: how about a robotic surgeon?

Even if it is controlled by a human, it still looks pretty scary.

At the big Japanese robotics show they showed off a cheap robot that could feed people (only around 3k in price).

They also developed a robotic harbor seal that is used to provide social interaction in nursing homes. This particular robot has been wildly successful so far.

So how long until your nurse is a robot? Probably sooner than we may think.

Is our post-commercial world ready for what little labor still exists to be replaced by robots?

rstr5105 says:

What is intelligence...?

AFAIK, the definition of “True” artificial intelligence is a machine that meets the following requirements:

1) It can fool a human that “talks” to it into thinking that they are talking to another human.

2) It “thinks” for itself, monitoring its environment and making “real-life” decisions about it. Not just “If it’s cold in the room, I’ll turn on the heat” but deeper than that: instead of just saying “It’s cold,” it’s supposed to “wonder” why it’s cold.

And finally, the most important:

3) Self-awareness. The machine must be aware of its existence and, like humans, question it.

Now, cramming a machine full of knowledge doesn’t fulfill the requirements listed above; not only that, but I didn’t see anything about these in the article above. So in essence all the Israelis are doing is building a “better” database search algorithm, and a “better” search database. Not AI.

Sorry guys. Better luck next time.

misanthropic humanist says:

Re: What is intelligence...?

Those three points (taken from Searle, btw) are not necessary conditions of AI; they are categorical distinctions between AI types, which are: the Turing test, symbolic reasoning, and sentience.

i) Turing test

Turns out to be extremely easy (relative to the other two) to “pass” this. Humans are so vain and shallow that all you have to do in order to convince them that they are having an intelligent conversation is take their own words, change them a bit and feed them back to them. You can do this in a few hundred lines of code. Many people will happily chat with Eliza-like chat bots for ages before getting bored with their own reflection.
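(A toy sketch of that reflection trick, nowhere near the real Eliza but enough to show how little machinery is involved:)

```python
# Bare-bones Eliza-style reflection: pronoun-swap the user's own words and
# hand them back as a question. No understanding involved, yet it often
# passes for conversation far longer than it deserves to.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(sentence):
    return " ".join(REFLECTIONS.get(word, word) for word in sentence.lower().split())

def respond(sentence):
    return "Why do you say that " + reflect(sentence) + "?"

print(respond("I am worried about my job"))
# -> Why do you say that you are worried about your job?
```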

ii) Symbolic reasoning

This is the step that is currently challenging to us. Contrary to the TFA summary and some other comments, parsing is not the difficult step. The ability to deconstruct well-written language and extract meaning and intention is quite possible, although borderline syntactic constructs are notoriously difficult, e.g. “The boat floated down the river sank.” The difficult step is to represent this and perform useful operations akin to human-like reasoning upon it. Blocks-world type limited reasoning is possible – the domain must be kept very small. This is the domain of expert systems and knowledge-based systems, which have been around since the 1950s.
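(To make “blocks-world type limited reasoning” concrete, here is a toy forward-chaining sketch. The facts and the single rule are invented for illustration, and the domain has to stay this small for the approach to remain tractable:)

```python
# Tiny forward-chaining knowledge base: a few "on" facts about stacked blocks
# and one rule ("above" is implied by "on" and is transitive), applied until
# no new facts can be derived.
facts = {("on", "b", "a"), ("on", "a", "table")}

def derive(facts):
    """One pass of the rule: on(X, Y) => above(X, Y); above(X, Y) & above(Y, Z) => above(X, Z)."""
    new = {("above", x, y) for (rel, x, y) in facts if rel == "on"}
    new |= {("above", x, z)
            for (r1, x, y1) in facts if r1 == "above"
            for (r2, y2, z) in facts if r2 == "above" and y1 == y2}
    return new

while True:
    added = derive(facts) - facts
    if not added:
        break
    facts |= added

print(sorted(facts))
# ('above', 'b', 'table') is derived even though it was never stated directly.
```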

iii) Sentience

This is the big one. This is what most people understand by intelligence. There is not even any common-sense definition of it, let alone some hint of a plan to achieve it. Many hypothesise that the key to this is deep recursion and self-reference; others assert that consciousness is necessarily like schizophrenia and requires at least two computers each mirroring the other (as in left-right brain). Some think it merely a matter of critical complexity. No one really knows, and probably no one will within our lifetimes.

Good introductory texts to read on this subject (imho; these were all second-year AI texts I read in the 1980s) are Churchland, Searle, Minsky, Hofstadter, and Dennett. Another interesting book is Penrose, which deals specifically with sentience. I recommend Jackson on expert systems and Bishop on neural nets.

It’s worth noting that this comes from an Israeli research company. Many Israelis are notorious fantasists and masters of talking up bullshit research projects to attract money. In recent years they have “invented” force-fields, time-machines, God knows what else. I always pronounce it “Isn’t Really Research”.

So this is just a KBS using Wiki as input. Stanford did this back in the early 1990s with a smaller database of human knowledge and basically concluded the idea was dumb. It is more likely to fail with a larger database, not less, because the problem is permutative complexity. The only way to achieve any semblance of AI is to use neural topologies that collapse the dimensions of the data set into something manageable, but these take a long time to train.
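(As a toy illustration of “collapsing the dimensions of the data set”: a truncated SVD over a made-up term-document matrix. The SVD stands in here for the neural topologies the comment mentions; both squeeze high-dimensional word counts down to a few latent dimensions so documents can be compared there instead of in raw word space.)

```python
# Reduce a tiny term-document count matrix to two latent dimensions with a
# truncated SVD, then compare documents in that compressed "concept" space.
import numpy as np

# Rows = documents, columns = counts for a made-up 5-word vocabulary.
X = np.array([
    [2, 1, 0, 0, 0],   # a document about cats
    [1, 2, 0, 0, 0],   # another cat document
    [0, 0, 1, 2, 1],   # a document about finance
    [0, 0, 2, 1, 1],   # another finance document
], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                          # keep only the two strongest latent dimensions
docs_2d = U[:, :k] * s[:k]     # each document as a 2-dimensional vector

print(np.round(docs_2d, 2))    # cat documents cluster together, finance documents together
```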

Afro beast says:

done before

I think something very similar to this has already been done with Encarta… so it would be useless having another one with Wikipedia!!

Plus, it’s not really AI, it’s just bullshit. As usual it’s gonna have tons and tons of bugs… so what’s the big use here anyway?? Are humans losing their brains so they can’t use “SEARCH” and “READ” anymore?!

…And it’s also a big fat waste of money…

PhysicsGuy says:

Re:

“If you’re all so damn smart, how come you haven’t come up with a solution to creating AI?”

actually, there are many, MANY forms of artificial intelligence out there. one of the most common is that within video games. sure, bots are a simple form of ai, but it’s still ai nonetheless. the person who gave 3 defining characteristics to ai is a moron. ai is a broad term, and i think the big problem is that people don’t understand how intelligence is defined.
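(For reference, a game bot of the simple kind meant here can be little more than a finite-state machine; the states and actions below are invented for illustration:)

```python
# A finite-state-machine game bot: pick an action from the current state and
# a couple of observations. No learning, no reasoning, still "AI" in the broad sense.
def bot_step(state, enemy_visible, low_health):
    if low_health:
        return "flee", "retreating"
    if state == "patrolling" and enemy_visible:
        return "attack", "fighting"
    if state == "fighting" and not enemy_visible:
        return "search", "patrolling"
    return "wander", state

state = "patrolling"
for enemy_visible, low_health in [(False, False), (True, False), (False, False), (False, True)]:
    action, state = bot_step(state, enemy_visible, low_health)
    print(action, "->", state)
# wander -> patrolling, attack -> fighting, search -> patrolling, flee -> retreating
```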
