Top notch AI system about as smart as a four-year-old, lacks commonsense

By Darren Quick

July 15, 2013

Researchers have found that an AI system has an average IQ of a four-year-old child (Image: Shutterstock)

Those who saw IBM’s Watson defeat former winners on Jeopardy! in 2011 might be forgiven for thinking that artificially intelligent computer systems are a lot brighter than they are. While Watson was able to cope with the highly stylized questions posed during the quiz, AI systems are still left wanting when it comes to commonsense. This was one of the factors that led researchers to find that one of the best available AI systems has the average IQ of a four-year-old.

To see just how intelligent AI systems are, a team of artificial and natural knowledge researchers at the University of Illinois at Chicago (UIC) subjected ConceptNet 4 to the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, which is a standard IQ test for young children. ConceptNet 4 is an AI system developed at MIT that relies on a commonsense knowledge base created from facts contributed by thousands of people across the Web.

While the UIC researchers found that ConceptNet 4 is on average about as smart as a four-year-old child, the system performed much better at some portions of the test than others. While it did well on vocabulary and in recognizing similarities, its overall score was brought down dramatically by a bad result in comprehension, or commonsense “why” questions.

“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study. “We’re still very far from programs with commonsense–AI that can answer comprehension questions with the skill of a child of eight.”

Sloan says AI systems struggle with commonsense because it relies not only on a large collection of facts, which computers can access easily through a database, but also on obvious things that we don’t even know we know – things that Sloan calls “implicit facts.” For example, a computer may know that water freezes at 32° F (0° C), but it won’t necessarily know that it is cold, which is something that even a four-year-old child will know.
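Sloan's distinction can be sketched in a few lines of toy code (purely illustrative; the names and structure are invented and have nothing to do with ConceptNet's actual API). Explicit facts are easy to store and retrieve, but implicit ones simply aren't there unless a contributor thought to write them down:

```python
# Toy fact store (hypothetical, not ConceptNet's real API). Explicit facts
# can be looked up directly; implicit facts are absent unless someone
# explicitly contributed them.
facts = {
    ("water", "freezes_at"): "32 F (0 C)",
    ("water", "is_a"): "liquid",
}

def lookup(subject, relation):
    """Return the stored fact, or None if nobody wrote it down."""
    return facts.get((subject, relation))

print(lookup("water", "freezes_at"))  # explicit fact: 32 F (0 C)
print(lookup("ice", "feels"))         # implicit fact ("cold") is missing: None
```

A four-year-old never needs the missing entry added by hand; a fact-database system does.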

“All of us know a huge number of things,” says Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled.”

Sloan and his colleagues hope their study will help identify areas for AI research to focus on to improve the intelligence of AI systems. They will present their study on July 17 at the US Artificial Intelligence Conference in Bellevue, Washington.

Source: UIC

About the Author
Darren Quick
Darren's love of technology started in primary school with a Nintendo Game & Watch Donkey Kong (still functioning) and a Commodore VIC 20 computer (not still functioning). In high school he upgraded to a 286 PC, and he's been following Moore's law ever since. This love of technology continued through a number of university courses and crappy jobs until 2008, when his interests found a home at Gizmag.
11 Comments

It's impossible to create an AI...one must develop an AI and give it the same years to learn and explore the world as a human if we expect it to become a truly autonomous learning being.

Children are progressively allowed more freedoms as they learn and gain experience; AIs probably won't experience that same freedom to explore and be 'off the leash' as humans do, simply out of fear.

We are nowhere near ready to create a true AI as far as hardware goes; physical autonomy is just not practical yet. It is coming, though, and very rapidly. We lack energy production and power storage technology; by the time individual homes are powered internally without outside resources, we should have the technical ability in other areas to create what we envision as AI.

We need another 200 years, not just for hardware, but for humanity to move past our current reliance on economy, societal issues, and capitalism. Those three are very much in the way of our progress. When education is freely available to even the poorest, regardless of ability to earn good grades or pay for it, we will see the most amazing advancements in our history...and they will come from the most unlikely sources we can imagine today.

Even now our brightest minds are working to develop products for corporations to maintain our reliance on an economic system that hinders advancement; we have scientists without funding, engineers without creative license, designers and thinkers selling trinkets and advertising instead of changing the world.

I read science fiction and see in that fiction worlds where everyone eats without economic impact, where healthcare is without cost, where people are free to follow dreams and work on what they have a passion for. It's not real, but it could be; it's not possible in how we think and operate in our world, but it could be. Sure, it's fantasy, and I know we can't simply change the world in one or even five lifetimes, but it could be done. I see so much potential for humanity, and then see poor children living in slums with no hope of an education and mourn the loss of that child's potential.

John Hemingway Parkes
16th July, 2013 @ 04:51 am PDT

In 2012, IBM estimated that a human brain can process 36.8 petaflops of data. The fastest supercomputer in the world at the time of that estimate was 16.32 petaflops. The top supercomputer on this year's TOP500 list from June can process 33.86 petaflops.

That is far ahead of the #2 system, and it puts us on the cusp of having a supercomputer that surpasses the brain in data processing. That is not to say that programming human intelligence into a supercomputer with over 3 million processors is a trivial task, but we are pretty close to what will be a huge milestone on the way to victory for the machines.

Daishi
16th July, 2013 @ 11:34 am PDT
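Daishi's comparison can be sanity-checked with a couple of lines of Python (figures exactly as quoted in the comment above; the estimates themselves are IBM's and TOP500's, not verified here):

```python
# Daishi's figures, as quoted in the comment above.
brain_pflops = 36.8     # IBM's 2012 estimate for the human brain
top500_jun2013 = 33.86  # #1 system on the June 2013 TOP500 list
top500_2012 = 16.32     # fastest system at the time of IBM's estimate

# The June 2013 leader reaches about 92% of the brain estimate,
# roughly double the 2012 leader's throughput.
print(top500_jun2013 / brain_pflops)  # ~0.92
print(top500_jun2013 / top500_2012)   # ~2.07
```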

It's more like it lacks any type of actual human intelligence.

If it's set up like Watson, it's basically a search engine. Enter a question, get an answer from the top of the list of search items found.

Tell it "build me a tower of blocks" and you'll wait forever, because it's no more capable of doing that than your washing machine is.

Then again, in the real world, anthropomorphism isn't a very good measure of the usefulness of an AI system. Watson does what it is designed to do very well.

Jon A.
16th July, 2013 @ 02:35 pm PDT

You don't just create AI for the sake of AI, you create it to achieve a task... the more dynamic the task, the more complex the AI. Even our own brain: it is a fact that if you isolate it from the world, it withers and dies. It needs tasks, and that's what drives the intelligence to develop (it was the complex hand that needed a large brain to work it). As we create machines to function in more dynamic and complex tasks, we will develop the software to deal with these tasks and they will get 'smarter'. Each lesson learned will then be applied to other systems and so on. The old 'brain in a box' of sci-fi is of no purpose... what will drive AI is setting machines off in a dynamic environment and having to cope with the environment to achieve a goal. As for John's comments above... while I agree, I have to say it's a bit off topic.

SciFi9000
16th July, 2013 @ 05:11 pm PDT

Oh, but AI for the sake of AI is indeed the goal of many. But what would a machine with AI want? Would it too become egocentric, wanting to ensure its own survival above all else? Of course, this is the very nature of intelligence. Altruism is an ill-defined characteristic, poorly demonstrated anywhere yet on earth. If we were able to program our societal structures around such compassionate values, we would then be ready to truly endorse an advancement of AI systems that would by their nature have access to our most destructive inventions. But if society was smart enough to be truly compassionate, would we then need AI, or just quick access to real facts?

Ricky Hall
16th July, 2013 @ 07:33 pm PDT

The architecture has to be changed to support human/animal like associative memory.

Learning is a collection of events with inter-dependencies to other events, sorted and categorized between each other in hierarchies governed by external factors (pain, reward, curiosity) as well as past outcomes.

What you really need is a hugely wide bus (2048-4096 bit), with lots of memory and thousands of small analogue logic cores with 12-16 bit A/Ds for their interface to the bus. Each small core needs to hold a few kilobytes of memory, and only needs to operate at a few MHz. Their internal states and i/o tendencies are in a state of flux dependent on neighboring cores. A specific master core (transaction ASIC) would use the ultra-wide bus to simulate a hyper-connected matrix where the analogue cores believe they are in a many-to-many topology.

Probably best achieved with an array of modified FPGAs. Unfortunately the FPGAs would have to somehow be modified so that each cause effect event is written to an externally attached core.

The only issue is that off-the-shelf FPGAs cannot be accessed while being written to.

Back in earlier years I suggested that the best approach would have been a cascaded FPGA where a master unit would handle connection between sub-units that in themselves would be capable of re-writing a third layer. In this way the "brain" layer would not see the other levels of abstraction, thereby allowing topology changes to take place transparently. This kind of architecture can in principle be flattened in a 3D matrix of interconnecting mesh, where each interconnect is a small analogue processor, fast A/D and small bit of volatile memory buffer to hold last state. This architecture has its drawbacks, but could achieve a limited version of above on a die with only power and external inputs/stimuli for interface.

Nairda
16th July, 2013 @ 10:36 pm PDT
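The core idea in Nairda's sketch above can be illustrated in software (all parameters are invented for illustration; this is a crude stand-in, not a design): many small cores whose internal state is in flux, each pulled toward the states of its neighbours on every bus cycle.

```python
import random

# Toy simulation of many small cores on a ring (parameters invented):
# each core keeps half its state per cycle and absorbs a quarter from
# each neighbour, a crude stand-in for "internal states in flux
# dependent on neighboring cores".
N_CORES = 16

def bus_cycle(states):
    """One master-core transaction: every core blends with its ring neighbours."""
    n = len(states)
    return [0.5 * states[i] + 0.25 * (states[i - 1] + states[(i + 1) % n])
            for i in range(n)]

random.seed(0)
states = [random.random() for _ in range(N_CORES)]
spread_before = max(states) - min(states)

for _ in range(50):
    states = bus_cycle(states)

spread_after = max(states) - min(states)
# Neighbouring cores pull each other's states together over time.
print(spread_before, spread_after)
```

Real hardware would replace the ring with the hyper-connected matrix the comment describes, but the contraction of states toward consensus is the same effect.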

Common sense requires a viewpoint, a personal referent - an aspect of consciousness. Without such self-awareness, there is no place to "stand" to see how things relate to you, and all you will get is a fancy search engine.

I suggest you read Doug Hofstadter's "I Am A Strange Loop" to appreciate the role that recursion and self-reference play in a true AI.

Or you can read my SF work, "Pa'an", due out shortly.

Ken Brody
17th July, 2013 @ 11:15 am PDT

It sounds like me at age four. (I have Asperger's Syndrome.) Should someone tell the researchers that they have invented the world's first non-human Aspie?

Kate Gladstone
17th July, 2013 @ 11:45 am PDT

Well, first of all... this is to be expected. It's early on in the game. As the person mentioned above, the more dynamic the task, the more the AI will develop. We are in an exponential phase. Yes, in 2013 it's going to be rudimentary... give it 15 years at the rate of exponential increase, and we will have something more reasonable then.

Mike Brown
18th July, 2013 @ 11:46 am PDT

Or less "reasonable". Do you watch much Sci-fi?

Routy
18th July, 2013 @ 03:10 pm PDT

Great discussion. I am Program Director at IBM's Watson Solutions group. I especially agree with Jon A's comment about fit-for-purpose design. For example, the fact that Watson is quite good at helping oncologists make more informed, evidence-based decisions in treating cancer patients, while at the same time being quite bad at doing the same for diabetes or describing the lineage of influence among Impressionist-era painters or any number of other things, is not a bad thing. It just means that nobody has assembled the relevant data corpus and taken the time to conduct training. As Mr. Parkes implies, learning systems don't do useful things until they've learned. And that learning evolves one step at a time.

Michael Holmes
24th July, 2013 @ 06:24 am PDT