Intelligence without representation

Introduction

Professor Rodney Brooks’ vision is to create a truly intelligent machine without the aid of representation. He argued that when intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. This essay considers four questions: What is artificial intelligence? What is Brooks’ vision, and what is his approach to achieving it? What constitutes a truly intelligent machine? And is Brooks’ approach effective in creating truly intelligent machines, and what obstacles does it face?

What is Artificial Intelligence

Artificial intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. AI is a field in computer science seeking to create a computer system capable of sensing the world around it, understanding conversations, learning, reasoning, and reaching decisions, just as would a human. AI is a combination of computer science, physiology, and philosophy.

AI is a broad topic, consisting of different fields, from machine vision to expert systems. The element that the fields of AI have in common is the creation of machines that can “think”. In order to classify machines as “thinking”, it is necessary to define intelligence. To what degree does intelligence consist of, for example, solving complex problems, or making generalizations and recognizing relationships? Research into the areas of learning, of language, and of sensory perception has aided scientists in building intelligent machines. One of the most challenging approaches facing experts is building systems that mimic the behaviour of the human brain, which is made up of billions of neurons and is arguably the most complex matter in the universe.

AI has come a long way from its early roots, driven by dedicated researchers. AI really began to intrigue researchers with the invention of the computer in 1943. In 1950, Alan Turing proposed a test for artificial intelligence in which a human being is asked to talk with an unseen conversant. The tester sends questions to the machine via teletype and reads its answers; if the subject cannot distinguish whether the conversation is being held with another human being or a machine, then the machine is deemed to have artificial intelligence. No machine has come close to passing this test, and it is unlikely that one will in the near future. Researchers, however, have made progress on specific pieces of the artificial intelligence puzzle, and some of their work has had substantial benefits.
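
As a rough illustration, the test can be framed as a judge exchanging text with a hidden conversant and deciding from the transcript alone. The Python sketch below is purely illustrative; the respond function is a hypothetical stand-in for either a human or a machine.

    # A minimal sketch of the imitation game protocol. `respond` is a
    # hypothetical stand-in for the hidden conversant (human or machine).
    def imitation_game(respond, questions):
        """The judge sends questions and records the answers."""
        transcript = []
        for question in questions:
            answer = respond(question)   # the judge cannot see who answered
            transcript.append((question, answer))
        # The judge must decide "human" or "machine" from the transcript
        # alone; the machine passes if the judge cannot reliably tell.
        return transcript

    print(imitation_game(lambda q: "I would rather not say.",
                         ["Are you a machine?"]))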

One area of progress is the field of expert systems, or computer systems designed to reproduce the knowledge base and decision-making techniques used by experts in a given field. Such a system can train workers and assist in decision making. MYCIN, a program developed in 1976 at Stanford University, suggests possible diagnoses for patients with infectious blood diseases, proposes treatments, and explains its “reasoning” in English. Corporations have used such systems to reduce the labour costs involved in repetitive calculations. A system used by American Express since November 1988 to advise when to deny credit to a customer saves the company millions of dollars annually.
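
To make the idea concrete, the sketch below shows a toy forward-chaining rule engine in the spirit of such systems. The rules and facts are invented for illustration and are not MYCIN's actual knowledge base.

    # A toy rule-based expert system: fire any rule whose conditions are all
    # known facts, and keep a trace so the system can explain its "reasoning".
    RULES = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis", "gram_negative"}, "suggest_antibiotic_x"),
    ]

    def infer(initial_facts):
        facts, trace = set(initial_facts), []
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{conclusion} because {sorted(conditions)}")
                    changed = True
        return facts, trace

    facts, why = infer({"fever", "stiff_neck", "gram_negative"})
    print(why)  # the explanation trace, echoing MYCIN's English explanations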

A second area of artificial intelligence research is the field of artificial perception, or computer vision. Computer vision is the ability to recognize patterns in an image and to separate objects from background as quickly as the human brain. In the 1990s military technology initially developed to analyze spy-satellite images found its way into commercial applications, including monitors for assembly lines, digital cameras, and automotive imaging systems.
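
The core task can be illustrated with the simplest possible figure/ground separation: thresholding pixel intensities to label object versus background. Real systems are far more sophisticated; the image and threshold below are invented for illustration.

    # Separate bright "objects" from a dark background by thresholding.
    image = [
        [ 10,  12,  11, 200, 210],
        [  9, 198, 205, 202,  13],
        [ 11, 201,  10,  12,  14],
    ]
    THRESHOLD = 100  # assumed cut-off between background and object intensity

    mask = [[1 if pixel > THRESHOLD else 0 for pixel in row] for row in image]
    for row in mask:
        print(row)  # 1 = object pixel, 0 = background pixel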

Another pursuit in artificial intelligence research is natural language processing, the ability to interpret and generate human languages. In this area, as in others related to artificial intelligence research, commercial applications have been delayed as improvements in hardware—the computing power of the machines themselves—have not kept pace with the increasing complexity of software.

The field of neural networks seeks to reproduce the architecture of the brain—billions of connected nerve cells—by joining a large number of computer processors through a technique known as parallel processing. Fuzzy systems are a subfield of artificial intelligence research based on the assumption that the world encountered by humans is filled with approximate rather than precise information. Interest in the field has been particularly strong in Japan, where fuzzy systems have been used in applications ranging from operating subway cars to guiding the sale of securities. Some theorists argue that the technical obstacles to artificial intelligence, while large, are not insurmountable. A number of computer experts, philosophers and futurists have speculated on the ethical and spiritual challenges facing society when artificially intelligent machines begin to mimic human personality traits, including memory, emotion, and consciousness.
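
The fuzzy-systems idea can be sketched with a single membership function: rather than a crisp boundary between "slow" and "fast", a speed belongs to the category "fast" to a degree between 0 and 1, and a controller blends its actions in proportion to that degree. The breakpoints below are invented for illustration.

    # Degree to which a speed counts as "fast", rising linearly from 0 to 1.
    def membership_fast(speed_kmh, low=40.0, high=80.0):
        if speed_kmh <= low:
            return 0.0
        if speed_kmh >= high:
            return 1.0
        return (speed_kmh - low) / (high - low)

    # A fuzzy subway controller might apply braking force in proportion to
    # how "fast" the train currently is, rather than all-or-nothing.
    for speed in (30, 60, 90):
        print(speed, membership_fast(speed))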

Brooks’ vision

Professor Rodney Brooks stated his approach to creating artificial intelligence in his paper as follows:

We must incrementally build up the capabilities of intelligent systems, having complete systems at each step of the way and thus automatically ensure that the pieces and their interfaces are valid.
At each step we should build complete intelligent systems that we let loose in the real world with real sensing and real action. Anything less provides a candidate with which we can delude ourselves.

He supported this approach by noting that he had built a series of autonomous mobile robots following it, and that in doing so he had reached an unexpected conclusion (C) and formed a rather radical hypothesis (H).

(C) When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.

(H) Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems.

Brooks’ Approach

Incremental Intelligence

Brooks stated his desire to build completely autonomous mobile agents that co-exist with humans in the world and are seen by human beings as intelligent beings in their own right. He called such agents Creatures. He declared this his intellectual motivation, while immediately noting that he had no particular interest in demonstrating how human beings work. Having considered the parable of the artificial flight (AF) researchers, he resolved to tread carefully in this endeavour to avoid some nasty pitfalls.

He considered the problem of building these Creatures to be an engineering problem. He then stated some of the requirements that must be met in order to build these Creatures (a minimal control-loop sketch follows the list):

  1. A Creature must cope appropriately and in a timely fashion with changes in its dynamic environment.
  2. A Creature should be robust with respect to its environment; minor changes in the properties of the world should not lead to total collapse of the Creature’s behaviour; rather one should expect only a gradual change in capabilities of the Creature as the environment changes more and more.
  3. A Creature should be able to maintain multiple goals and, depending on the circumstances it finds itself in, change which particular goals it is actively pursuing; thus it can both adapt to surroundings and capitalize on fortuitous circumstances.
  4. A Creature should do something in the world; it should have some purpose in being.
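
The sketch below is one hypothetical reading of these requirements as a control loop; the class and method names are illustrative and do not come from Brooks' paper.

    # A minimal Creature control loop. It reacts in a timely way
    # (requirement 1), degrades gracefully rather than collapsing when the
    # world changes (requirement 2), juggles multiple goals (requirement 3),
    # and always does something (requirement 4).
    class Creature:
        def __init__(self, goals):
            self.goals = goals  # multiple goals, switched opportunistically

        def sense(self):
            # A stub for real sensors; a real Creature reads real hardware.
            return {"obstacle_ahead": False, "battery_low": True}

        def act(self, percept):
            if percept["obstacle_ahead"]:
                return "turn"                 # timely reaction to the world
            if percept["battery_low"] and "recharge" in self.goals:
                return "seek_charger"         # capitalize on circumstances
            return "wander"                   # always have a purpose

        def run(self, steps=3):
            for _ in range(steps):
                print(self.act(self.sense()))

    Creature(goals=["explore", "recharge"]).run()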

Having set out the requirements for building a Creature, he then considered the valid engineering approaches to meeting them. He stated that it is necessary to decompose a complex system into parts, build the parts, and then interface them into a complete system.

Decomposition by function

Traditionally, the notion of intelligent systems has been of a central system with perceptual modules as inputs and action modules as outputs. The perceptual modules deliver a symbolic description of the world and the action modules take a symbolic description of desired actions and make sure they happen in the world. This makes the central system a symbolic information processor.
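
Schematically, the traditional decomposition looks like the pipeline below; every function is an illustrative stub.

    # The classical pipeline: perception builds a symbolic world model, a
    # central system reasons over it, and action modules execute the result.
    def perceive(raw_sensor_data):
        # Perceptual module: raw input -> symbolic description of the world.
        return {"objects": ["door", "wall"], "robot_at": (0, 0)}

    def central_system(world_model, goal):
        # Symbolic information processor: model + goal -> desired action.
        if goal in world_model["objects"]:
            return ("move_toward", goal)
        return ("explore", None)

    def act(symbolic_action):
        # Action module: symbolic action -> effects in the world.
        print("executing:", symbolic_action)

    act(central_system(perceive(raw_sensor_data=None), goal="door"))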

Brooks noted that, on this view, the central system itself is to be decomposed into smaller pieces. He stressed, however, that when researchers working on a particular module get to choose both the inputs and the outputs that specify the module's requirements, there is little chance that their work will fit into a complete intelligent system. He pointed to a bug in the functional decomposition approach that is hard to fix: a long chain of modules is needed to connect perception to action, and all of these modules must be built before any of them can be tested. He emphasized that until realistic modules are built, it is highly unlikely that anyone can predict exactly what modules will be needed or what interfaces the Creatures will require.

Decomposition by activity

This is the alternative to the decomposition described above; it makes no distinction between peripheral systems, such as vision, and central systems. Rather, the fundamental slicing up of an intelligent system is in the orthogonal direction, dividing it into activity-producing subsystems. Each activity- or behaviour-producing system individually connects sensing to action. Such an activity-producing system is referred to as a layer. An activity is a pattern of interactions with the world; another name for these activities is skill. The word activity was chosen, however, because the layers must decide when to act for themselves, not as subroutines to be invoked at the beck and call of some other layer.
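
A minimal sketch of this layering is shown below; the behaviours are invented for illustration, with earlier layers taking priority in the spirit of Brooks' layered approach.

    # Each layer connects sensing directly to action and decides for itself
    # whether to act; a layer returns None when it has nothing to say.
    def avoid_layer(percept):
        if percept.get("obstacle_ahead"):
            return "turn_away"
        return None

    def wander_layer(percept):
        return "move_forward"

    LAYERS = [avoid_layer, wander_layer]  # earlier layers take priority

    def step(percept):
        for layer in LAYERS:
            action = layer(percept)
            if action is not None:
                return action  # no central model mediates between layers

    print(step({"obstacle_ahead": True}))   # -> turn_away
    print(step({"obstacle_ahead": False}))  # -> move_forward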

Brooks cited as an advantage of this approach that it gives an incremental path from very simple systems to complex autonomous intelligent systems. He stressed the necessity of building one small piece at each step of the way and interfacing it to an existing, working, complete intelligence.

No Representation vs No Central Representation

Another element of Brooks’ approach is to eliminate the idea of a central representation or central system. In his words: “Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviours. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behaviour. There is no central purposeful locus of control.” He claimed, then, that there need be no explicit representation of either the world or the intentions of the system in order to generate intelligent behaviour in a Creature.

He acknowledged that an extremist might say that his systems do have representations, but swiftly countered that such representations are merely implicit. He differentiated his approach from standard representation by claiming (point 4 is illustrated in the sketch after this list):

  1. No variables need to be instantiated in reasoning processes.
  2. No rules need to be selected through pattern matching.
  3. No choices need to be made.
  4. To a large extent, the state of the world determines the action of the Creature.
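
Point 4 can be illustrated with a Braitenberg-style sensor-to-motor coupling, in which the action is a direct function of the currently sensed world, with no variable binding, rule matching, or explicit choice. The wiring below is illustrative and is not from Brooks' robots.

    # Crossed light-sensor wiring: each sensor drives the opposite wheel, so
    # the vehicle steers toward the brighter side without "deciding" anything.
    def motor_speeds(left_light, right_light):
        return (right_light, left_light)  # (left_wheel, right_wheel)

    print(motor_speeds(left_light=0.2, right_light=0.9))  # veers toward the light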

The methodology, in practice

Brooks stated that, in order for systems built by activity decomposition to be truly robust, their construction must follow a careful methodology.

Methodological maxims

First, it is vitally important to test the Creatures in the real world; i.e., in the same world that we humans inhabit. He also pointed out that it is disastrous to fall into the temptation of testing them in a simplified world first, even with the best intentions of later transferring activity to an unsimplified world. With a simplified world (matte-painted walls, rectangular vertices everywhere, coloured blocks as the only obstacles) it is very easy to accidentally build a submodule of the system that happens to rely on some of those simplified properties. This reliance can then easily be reflected in the requirements on the interfaces between that submodule and others.

Second, as each layer is built it must be tested extensively in the real world. The system must interact with the real world over extended periods, and its behaviour must be observed and carefully and thoroughly debugged. When a second layer is added to an existing layer, there are three potential sources of bugs: the first layer, the second layer, or the interaction of the two layers. Eliminating the first of these sources as a possibility makes finding bugs much easier. Furthermore, there is then only one thing that can be varied in order to fix the bugs: the second layer.

References

  • Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.

