Intelligence without representation*

Rodney A. Brooks

MIT Artificial Intelligence Laboratory, 545 Technology Square, Rm. 836, Cambridge, MA 02139, USA

Received September 1987

Brooks, R.A., Intelligence without representation, Artificial Intelligence 47 (1991), 139–159.

* This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the

research is provided in part by an IBM Faculty Development Award, in part by a grant from the Systems Development Foundation, in part by

the University Research Initiative under Office of Naval Research contract N00014-86-K-0685 and in part by the Advanced Research

Projects Agency under Office of Naval Research contract N00014-85-K-0124.

Abstract

Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with

strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline

our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into

independent information processing units which must interface with each other via representations. Instead, the intelligent system is

decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather

than interface to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and

peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in

standard office environments.

1. Introduction

Artificial intelligence started as a field whose goal

was to replicate human level intelligence in a

machine.

Early hopes diminished as the magnitude and

difficulty of that goal were appreciated. Slow progress

was made over the next 25 years in demonstrating

isolated aspects of intelligence. Recent work has

tended to concentrate on commercializable aspects of

"intelligent assistants" for human workers.

No one talks about replicating the full gamut of

human intelligence any more. Instead we see a retreat

into specialized subproblems, such as ways to

represent knowledge, natural language understanding,

vision or even more specialized areas such as truth

maintenance systems or plan verification. All the work in these subareas is benchmarked against the sorts of tasks humans do within those areas.

Amongst the dreamers still in the field of AI (those

not dreaming about dollars, that is), there is a feeling

that one day all these pieces will fall into place

and we will see "truly" intelligent systems emerge.

However, I, and others, believe that human level

intelligence is too complex and little understood to be

correctly decomposed into the right subpieces at the

moment and that even if we knew the subpieces we

still wouldn't know the right interfaces between

them. Furthermore, we will never understand how to

decompose human level intelligence until we've had a

lot of practice with simpler level intelligences.

In this paper I therefore argue for a different

approach to creating artificial intelligence:

• We must incrementally build up the capabilities of

intelligent systems, having complete systems at

each step of the way, thus automatically ensuring

that the pieces and their interfaces are valid.

• At each step we should build complete intelligent

systems that we let loose in the real world with real

sensing and real action. Anything less provides a

candidate with which we can delude ourselves.

We have been following this approach and have built

a series of autonomous mobile robots. We have

reached an unexpected conclusion (C) and have a

rather radical hypothesis (H).

(C) When we examine very simple level intelligence

we find that explicit representations and models

of the world simply get in the way. It turns out

to be better to use the world as its own model.

(H) Representation is the wrong unit of abstraction

in building the bulkiest parts of intelligent

systems.
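
To make conclusion (C) concrete, the sketch below shows the flavor of such a system in Python. It is only an illustration: the sensor stub, the layer names, and the thresholds are invented here, and this is not the control code of our actual robots. Each activity producer connects perception directly to action, a higher-priority layer preempts the ones below it, and the only "model" consulted on each cycle is the world itself, re-sensed afresh.

```python
import random

# Hypothetical sensor and actuator stubs standing in for real hardware.
def read_sonar():
    """Return simulated distances (meters) to obstacles on three bearings."""
    return {d: random.uniform(0.1, 3.0) for d in ("left", "front", "right")}

def drive(turn, speed):
    """Issue a motor command; here we simply print it."""
    print(f"turn={turn:+.2f} rad/s  speed={speed:.2f} m/s")

# Two independent, parallel activity producers. Each connects perception
# directly to action; neither builds nor consults a stored world model.
def avoid(sonar):
    """Higher-priority layer: veer away when something is close ahead."""
    if sonar["front"] < 0.5:
        return (0.8 if sonar["left"] > sonar["right"] else -0.8, 0.1)
    return None  # nothing to report; defer to lower layers

def wander(sonar):
    """Lower-priority layer: drift in a random direction."""
    return (random.uniform(-0.3, 0.3), 0.5)

layers = [avoid, wander]  # ordered by priority: avoid may subsume wander

for _ in range(5):
    # The only "model" consulted each cycle is the world itself, re-sensed.
    readings = read_sonar()
    for layer in layers:
        command = layer(readings)
        if command is not None:
            drive(*command)
            break
```

Note that no layer reads or writes a shared representation; the layers coordinate only through their priority over the actuators and through the world itself.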

Representation has been the central issue in artificial

intelligence work over the last 15 years only because

it has provided an interface between otherwise isolated

modules and conference papers.

2. The evolution of intelligence

We already have an existence proof of the

possibility of intelligent entities: human beings.

Additionally, many animals are intelligent to some

degree. (This is a subject of intense debate, much of

which really centers around a definition of

intelligence.) They have evolved over the 4.6 billion

year history of the earth.

It is instructive to reflect on the way in which

earth-based biological evolution spent its time.

Single-cell entities arose out of the primordial soup

roughly 3.5 billion years ago. A billion years passed

before photosynthetic plants appeared. After almost

another billion and a half years, around 550 million

years ago, the first fish and vertebrates arrived, and

then insects 450 million years ago. Then things

started moving fast. Reptiles arrived 370 million

years ago, followed by dinosaurs at 330 and

mammals at 250 million years ago. The first

primates appeared 120 million years ago and the

immediate predecessors to the great apes a mere 18

million years ago. Man arrived in roughly his present

form 2.5 million years ago. He invented agriculture a

mere 10,000 years ago, writing less than 5000 years

ago and "expert" knowledge only over the last few

hundred years.

This suggests that problem solving behavior,

language, expert knowledge and application, and

reason, are all pretty simple once the essence of being

and reacting are available. That essence is the ability

to move around in a dynamic environment, sensing

the surroundings to a degree sufficient to achieve the

necessary maintenance of life and reproduction. This

part of intelligence is where evolution has

concentrated its time—it is much harder.

I believe that mobility, acute vision and the ability

to carry out survival-related tasks in a dynamic

environment provide a necessary basis for the

development of true intelligence. Moravec [11] argues

this same case rather eloquently.

Human level intelligence has provided us with an

existence proof but we must be careful about what

lessons are to be gained from it.

2.1. A story

Suppose it is the 1890s. Artificial flight is the

glamor subject in science, engineering, and venture

capital circles. A bunch of AF researchers are

miraculously transported by a time machine to the

1980s for a few hours. They spend the whole time in

the passenger cabin of a commercial passenger

Boeing 747 on a medium duration flight.

Returned to the 1890s, they feel invigorated, knowing

that AF is possible on a grand scale. They

immediately set to work duplicating what they have

seen. They make great progress in designing pitched

seats, double pane windows, and know that if only

they can figure out those weird "plastics" they will

have their grail within their grasp. (A few

connectionists amongst them caught a glimpse of an

engine with its cover off and they are preoccupied

with inspirations from that experience.)

3. Abstraction as a dangerous weapon

Artificial intelligence researchers are fond of pointing

out that AI is often denied its rightful successes. The

popular story goes that when nobody has any good

idea of how to solve a particular sort of problem (e.g.

playing chess) it is known as an AI problem. When

an algorithm developed by AI researchers successfully

tackles such a problem, however, AI detractors claim

that since the problem was solvable by an algorithm,

it wasn't really an AI problem after all. Thus AI

never has any successes. But have you ever heard of

an AI failure?

I claim that AI researchers are guilty of the same

(self) deception. They partition the problems they

work on into two components: the AI component,

which they solve, and the non-AI component, which

they don't solve. Typically, AI "succeeds" by defining

the parts of the problem that are unsolved as not AI.

The principal mechanism for this partitioning is

abstraction. Its application is usually considered part

of good science, not, as it is in fact used in AI, as a

mechanism for self-delusion. In AI, abstraction is

usually used to factor out all aspects of perception

and motor skills. I argue below that these are the hard

problems solved by intelligent systems, and further

that the shape of solutions to these problems

constrains greatly the correct solutions of the small

pieces of intelligence which remain.

Early work in AI concentrated on games,

geometrical problems, symbolic algebra, theorem

proving, and other formal systems (e.g. [6, 9]). In

each case the semantics of the domains were fairly

simple.

In the late sixties and early seventies the blocks

world became a popular domain for AI research. It had

a uniform and simple semantics. The key to success

was to represent the state of the world completely and

explicitly. Search techniques could then be used for

planning within this well-understood world. Learning

could also be done within the blocks world; there

were only a few simple concepts worth learning and

they could be captured by enumerating the set of

subexpressions which must be contained in any

formal description of a world including an instance of

the concept. The blocks world was even used for

vision research and mobile robotics, as it provided

strong constraints on the perceptual processing

necessary [12].
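
For contrast, here is a minimal sketch, in Python, of the methodology just described; the fact vocabulary (on, ontable, clear) and the breadth-first planner are generic illustrations of the era's style, not any particular historical system. The state of the world is represented completely and explicitly as a set of ground facts, and planning is blind search over those explicit states.

```python
from collections import deque

# A state is a frozenset of ground facts such as ("on", "A", "B"),
# ("ontable", "B"), ("clear", "A"): the world represented completely
# and explicitly.

def successors(state):
    """Enumerate (action, next-state) pairs for every legal block move."""
    clear = {f[1] for f in state if f[0] == "clear"}
    for b in clear:
        on = next((f for f in state if f[0] == "on" and f[1] == b), None)
        # Remove b's current support fact; anything underneath becomes clear.
        base = set(state) - {on or ("ontable", b)}
        base |= {("clear", on[2])} if on else set()
        if on:  # put b down on the table
            yield (f"move {b} to table", frozenset(base | {("ontable", b)}))
        for c in clear - {b}:  # stack b on another clear block
            yield (f"stack {b} on {c}",
                   frozenset((base | {("on", b, c)}) - {("clear", c)}))

def plan(start, goal):
    """Breadth-first search for a state containing every goal fact."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))

start = frozenset({("on", "A", "B"), ("ontable", "B"), ("clear", "A")})
print(plan(start, frozenset({("on", "B", "A")})))
# -> ['move A to table', 'stack B on A']
```

Everything this planner knows is contained in its fact set; the uniform, simple semantics of the domain is precisely what makes such complete, explicit representation, and hence such search, tractable.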

