2 Intelligent Software Agents Theory

2.1 Introduction

Intelligent software agents are a popular research object these days in fields such as psychology, sociology and computer science. Agents are most intensely studied in the discipline of Artificial Intelligence (AI) [1]. Strangely enough, the question of what exactly an agent is seems to have been addressed seriously only very recently.

"It is in our best interests, as pioneers of this technology, to stratify the technology in such a way that it is readily marketable to consumers. If we utterly confuse consumers about what agent technology is (as is the case today) then we'll have a hard time fully developing the market potential."
J. Williams on the Software Agents Mailing List [2]

Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to form a good estimate of what agent technology can offer. At this moment, there is every appearance that there are more definitions than there are working examples of systems that could be called agent-based. Producers that unjustly use the term "agent" to designate their product lead users to conclude that agent technology as a whole has little to offer. That is - obviously - a worrying development:

"In order to survive for the agent, there must be something that really distinguishes agents from other programs, otherwise agents will fail. Researchers, the public and companies will no longer accept things that are called agent and the market for agents will be very small or even not exist."
Wijnand van de Calseyde on the Software Agents Mailing List

On the other hand, the description of agent capabilities should not be too rose-coloured either. Not everybody is that thrilled about agents. Especially in the field of computer science, a point of criticism often heard about agents is that they are not really a new technique, and that anything that can be done with agents "can just as well be done in C" [3]. According to these critics, agents are nothing but the latest hype. The main points of criticism can be summarised as follows:

1. Mainstream AI research (expert systems, neural networks) is not as successful as many people had hoped, and the new paradigm of agents is a way to escape from this;

2. Everything that carries the label "agent" sells (this holds in research as well). Like the words 'plus', 'super' and 'turbo', the term 'agent' sounds very attractive, even when most people do not know the exact meaning of 'plus', 'super', 'turbo' or 'agent'. Agents are nothing more than old wine in new bottles;

3. Because most current software agents have neither a very sophisticated nor a very complicated architecture, some wonder what qualifies them as "intelligent" [4].

[1] For general information about AI, see this WWW page: http://wombat.doc.ic.ac.uk/?AI
[2] This is a discussion list (using e-mail as the means of communication) about the subject of Software Agents. The list is used and read by both users and developers of such agents. For more information see http://www.ee.mcgill.ca/~belmarc/agent_faq.html.
[3] C is a structured programming language developed by Dennis Ritchie at Bell Laboratories in 1972. C is a compiled language that contains a small set of built-in functions that are machine dependent. The rest of the C functions are machine independent and are contained in libraries that can be accessed from C programs.
[4] Unfortunately, that question opens up the old AI can of worms about definitions of intelligence. For example, does an intelligent entity necessarily have to possess emotions, self-awareness, etcetera, or is it sufficient that it performs tasks for which we currently do not possess algorithmic solutions?
Particularly by researchers in the field of AI, these points of criticism are refuted with the following arguments:

1. What distinguishes multi-agent architectures from other architectures is that they provide acceptable solutions to certain problems at an affordable price. These are the kinds of problems that cannot be solved with available resources in reasonable time by monolithic knowledge-based systems [5]. An example can be found in the field of integrated decision making, where systems are built in which a single final diagnosis is based on the diagnoses of individual worker agents (a small sketch of such a combination is given further below). Moreover, there are some problems in the field of AI that cannot be solved satisfactorily unless a multi-agent architecture (i.e. an architecture in which independent agents work together to accomplish all kinds of tasks) is used;

2. Agents make it possible to eradicate the differences between the various kinds of networks (WAN, LAN [6], Internet) and to make the borders between them 'disappear'.

Some researchers like to take this one step further by playing with the notion of agents that supersede AI [7]. The response of (particularly) these researchers to the pronouncement quoted earlier, that what agents can do "can just as well be done in C", can be summarised in the following points:

1. It does not matter what the underlying technique of an agent is. Whether that is a C program, a Perl script or a LISP program: what it all boils down to is what the agent is and is not capable of doing. Or, to be more precise: whether the agent is capable of displaying intelligent behaviour. Whether the basis for that behaviour is a C program, or any other programming language or technique, does not really matter;

2. It does not always hold that everything that can be done by multiple co-operating agents may "just as well be done in C" (not even in the object-oriented variant of that programming language). There are several tasks and problems for which there is scientific proof that they cannot be accomplished or solved by one single program or person. These kinds of problems call for a distribution of the task or problem over multiple entities (i.e. a multi-agent architecture), because this will lead to a solution in a much shorter time, and quite often to a solution of higher quality, since it is the result of a subtle combination of the partial results of each individual entity.

[5] The 'opposite' can be said as well: in many cases the individual agents of a system are not that intelligent at all, but their combination and co-operation leads to the intelligence and smartness of the agent system.
[6] LAN stands for Local Area Network (as opposed to a WAN: a Wide Area Network). A LAN is a group of computers and other devices dispersed over a relatively limited area and connected by a communications link that enables any device to interact with any other on the network. LANs commonly include microcomputers and shared (often expensive) resources such as laser printers and large hard disks. Most (modern) LANs can support a wide variety of computers and other devices.
[7] These researchers see a paradigm shift from those who build intelligent systems, and consequently grapple with problems of knowledge representation and acquisition, to those who build distributed, not particularly intelligent, systems and hope that intelligence will emerge in some sort of Gestalt fashion. The knowledge acquisition problem gets solved by being declared a 'non-problem'.
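To make the idea of combining partial results a little more concrete, the following sketch shows a hypothetical coordinator that collects diagnoses from independent worker agents and merges them into a single final diagnosis by summing confidence scores. All names (WorkerAgent, diagnose, combine) and the toy example are assumptions made purely for illustration; the sketch does not describe any particular system mentioned in this chapter.

# Minimal sketch: integrated decision making with worker agents.
# Each worker produces partial diagnoses with a confidence score;
# a coordinator combines the partial results into one final diagnosis.
# All class and function names here are hypothetical illustrations.

from collections import defaultdict


class WorkerAgent:
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # maps an observed symptom to (diagnosis, confidence)

    def diagnose(self, symptoms):
        """Return this worker's partial diagnoses for the observed symptoms."""
        return [self.rules[s] for s in symptoms if s in self.rules]


def combine(workers, symptoms):
    """Coordinator: weight each partial diagnosis by its confidence and
    return the diagnosis with the highest combined score."""
    scores = defaultdict(float)
    for worker in workers:
        for diagnosis, confidence in worker.diagnose(symptoms):
            scores[diagnosis] += confidence
    return max(scores, key=scores.get) if scores else None


if __name__ == "__main__":
    workers = [
        WorkerAgent("lab", {"high_temperature": ("infection", 0.7)}),
        WorkerAgent("history", {"recent_travel": ("infection", 0.4),
                                "high_temperature": ("heat_stroke", 0.3)}),
    ]
    print(combine(workers, ["high_temperature", "recent_travel"]))  # -> infection

Such a coordinator is of course only the crudest form of "subtle combination"; the point is merely that the final answer is produced by no single worker on its own.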
The pros and cons of agents mentioned here are by no means complete, and should be seen merely as an illustration of the general discussions about agents. What they do show is why it is necessary (in several respects) to have a definition of the concept "intelligent software agent" that is as clear and as precise as possible. They also show that there is probably a long way to go before we arrive at such a definition - if we can arrive at one at all.

2.2 Definition

"An agent is a software thing that knows how to do things that you could probably do yourself if you had the time."
Ted Selker of the IBM Almaden Research Centre (quote taken from [JANC95])

In this section we will not arrive at a rock-solid formal definition of the concept "agent". Given the multiplicity of roles agents can play, this is quite impossible and even rather impractical. On the Software Agents Mailing List, however, a possible informal definition of an intelligent software agent was given:

"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."
(with thanks to G.W. Lecky-Thompson for this definition)
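To make this informal definition slightly more tangible, the minimal sketch below shows a piece of software that performs a task, gleans information from a (very simple) environment, and adapts its strategy when circumstances change. The class and method names (TaskAgent, perceive, adapt) and the "slow network" condition are illustrative assumptions, not part of the definition itself.

# A minimal reading of the informal definition above: software that
# performs a task using information gleaned from its environment, and
# adapts itself when that environment changes. Names are illustrative.


class TaskAgent:
    def __init__(self, task):
        self.task = task
        self.strategy = "default"

    def perceive(self, environment):
        """Glean information from the environment (here: a simple dict)."""
        return environment.get("conditions", {})

    def adapt(self, conditions):
        """Adjust the internal strategy when circumstances change."""
        if conditions.get("network_slow"):
            self.strategy = "batch requests"

    def run(self, environment):
        conditions = self.perceive(environment)
        self.adapt(conditions)
        return f"completed '{self.task}' using strategy: {self.strategy}"


print(TaskAgent("collect news").run({"conditions": {"network_slow": True}}))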
Instead of a formal definition, a list of general characteristics of agents will be given. Together these characteristics give a global impression of what an agent "is" [8]. The first group of characteristics, presented in section 2.2.1, is connected to the weak notion of the concept "agent". That an agent should possess most, if not all, of these characteristics is something most scientists currently agree upon. This is not the case, however, with the second group of characteristics, which is connected to the strong notion of the concept "agent". The characteristics presented in section 2.2.2 do not go without saying for everybody. What "intelligence" is, and what the related term "agency" means, is explained in section 2.2.3.

[8] See [WOOL95] for a more elaborate overview of the theoretical and practical aspects of agents.

2.2.1 The weak notion of the concept "agent"

Perhaps the most general way in which the term agent is used is to denote a hardware or (more usually) software-based computer system that enjoys the following properties:

- autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state [9];

- social ability: agents interact with other agents and (possibly) humans via some kind of agent communication language [10];

- reactivity: agents perceive their environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it [11]. This may entail that an agent spends most of its time in a kind of sleep state [12], from which it awakes if certain changes in its environment (like the arrival of new e-mail) give rise to it;

- proactivity: agents do not simply act in response to their environment; they are able to exhibit goal-directed behaviour by taking the initiative;

- temporal continuity: agents are continuously running processes (either running actively in the foreground or sleeping/passive in the background), not once-only computations or scripts that map a single input to a single output and then terminate;

- goal orientedness: an agent is capable of handling complex, high-level tasks. The decision as to how such a task is best split up into smaller sub-tasks, and in which order and in which way these sub-tasks should best be performed, should be made by the agent itself.

Thus, a simple way of conceptualising an agent is as a kind of UNIX-like software process [13] that exhibits the properties listed above (a minimal sketch of such a process follows below). A clear example of an agent that meets the weak notion of an agent is the so-called softbot ('software robot'). This is an agent that is active in a software environment (for instance the previously mentioned UNIX operating system).

[9] See: Castelfranchi, C. (1995). Guarantees for autonomy in cognitive agent architecture. In Wooldridge, M. and Jennings, N. R., editors, Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pages 56-70. Springer-Verlag: Heidelberg, Germany.
[10] See: Genesereth, M. R. and Ketchpel, S. P. (1994). Software Agents. Communications of the ACM, 37(7): pages 48-53.
[11] Note that the kind of reactivity displayed by agents goes beyond that of so-called (UNIX) daemons. Daemons are system processes that continuously monitor system resources and activities, and become active once certain conditions (e.g. thresholds) are met. As opposed to agents, daemons react in a very straightforward way, and they do not get better at reacting to certain conditions.
[12] Analogous to the "sleep" state in a UNIX system (see the next footnote): a process that has no further tasks to do, or has to wait for another process to finish, goes into a sleep state until another process wakes it up again.
[13] UNIX is an operating system that is mostly used on large computer systems and workstations. The concept of a process is the basic idea behind UNIX (a program running under UNIX consists of one or more independent processes, which usually operate in parallel).
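The sketch below pictures this weak notion as a small, continuously running process: it sleeps until a change in its environment wakes it up (here, the arrival of new e-mail in a hypothetical mailbox represented by a queue), then responds and takes the initiative to handle the event. The mailbox, message text and handler behaviour are assumptions invented for the illustration.

# Minimal sketch of an agent as a continuously running, UNIX-like process:
# it sleeps most of the time and wakes up when its environment changes
# (here: new mail arriving in a hypothetical mailbox). Illustrative only.

import queue
import threading
import time


def agent_loop(mailbox: queue.Queue, stop: threading.Event):
    while not stop.is_set():                      # temporal continuity
        try:
            message = mailbox.get(timeout=1.0)    # "sleep" until woken
        except queue.Empty:
            continue                              # nothing happened; keep sleeping
        # reactivity + proactivity: respond to the change, take the initiative
        print(f"agent: filing and answering '{message}'")


if __name__ == "__main__":
    mailbox, stop = queue.Queue(), threading.Event()
    worker = threading.Thread(target=agent_loop, args=(mailbox, stop))
    worker.start()
    mailbox.put("new e-mail: meeting moved to 14:00")   # environment change
    time.sleep(2)
    stop.set()
    worker.join()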
2.2.2 The strong(er) notion of the concept "agent"

For some researchers - particularly those working in the field of AI - the term agent has a stronger and more specific meaning than that sketched in the previous section. These researchers generally take an agent to be a computer system that, in addition to having the properties identified above, is either conceptualised or implemented using concepts that are more usually applied to humans. For example, it is quite common in AI to characterise an agent using mentalistic notions, such as knowledge, belief, intention and obligation [14]. Some AI researchers have gone further and considered emotional agents [15]. Another way of giving agents human-like attributes is to represent them visually, using techniques such as a cartoon-like graphical icon or an animated face [16]. Research into this matter [17] has shown that, although agents are pieces of software code, people like to deal with them as if they were dealing with other people (regardless of the type of agent interface that is being used).

Agents that fit the stronger notion of agent usually have one or more of the following characteristics [18]:

- mobility: the ability of an agent to move around an electronic network [19];

- benevolence: the assumption that agents do not have conflicting goals, and that every agent will therefore always try to do what is asked of it [20];

- rationality: (crudely) the assumption that an agent will act in order to achieve its goals and will not act in such a way as to prevent its goals being achieved - at least insofar as its beliefs permit [21];

- adaptivity: an agent should be able to adjust itself to the habits, working methods and preferences of its user;

- collaboration: an agent should not unthinkingly accept (and execute) instructions, but should take into account that the human user makes mistakes (e.g. gives an order that contains conflicting goals), omits important information and/or provides ambiguous information. For instance, an agent should check things by asking the user questions, or use a built-up user model to solve problems like these. An agent should even be allowed to refuse to execute certain tasks, because (for instance) they would put an unacceptably high load on the network resources or would cause damage to other users [22] (a small sketch of this kind of checking follows below).

Although no single agent possesses all these abilities, there are several prototype agents that possess quite a lot of them (see section 3.2.2 for some examples). At this moment no consensus has been reached about the relative importance (weight) of each of these characteristics in the agent as a whole. What most scientists have come to a consensus about is that it is these kinds of characteristics that distinguish agents from ordinary programs.

[14] See: Shoham, Y. (1993). Agent-oriented programming. Artificial Intelligence, 60(1): pages 51-92.
[15] See, for instance: Bates, J. (1994). The role of emotion in believable agents. Communications of the ACM, 37(7): pages 122-125.
[16] See: Maes, P. (1994). Agents that reduce work and information overload. Communications of the ACM, 37(7): pages 31-40.
[17] See, for instance: Norman, D. (1994). How Might People Interact with Agents. Communications of the ACM, July 1994.
[18] This list is far from complete. There are many other characteristics of agents that could have been added to it. The characteristics mentioned here serve illustrative purposes and should not be interpreted as an ultimate enumeration.
[19] See: White, J. E. (1994). Telescript technology: The foundation for the electronic marketplace. White paper, General Magic Inc.
[20] See: Rosenschein, J. S. and Genesereth, M. R. (1985). Deals among rational agents. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), pages 91-99, Los Angeles, United States.
[21] See: Galliers, J. R. (1994). A Theoretical Framework for Computer Models of Cooperative Dialogue, Acknowledging Multi-Agent Conflict. PhD thesis, pages 49-54, Open University, Great Britain.
[22] See: Eichmann, D. (1994). Ethical Web Agents. In Proceedings of the Second International World-Wide Web Conference, Chicago, United States, October 1994.
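As a rough illustration of a few of these stronger characteristics, the sketch below gives an agent a little mentalistic state (beliefs and goals) and a collaboration-style check: instead of blindly executing every instruction, it refuses tasks whose (hypothetical) network load exceeds a threshold in its beliefs, and asks the user about instructions that conflict with goals it is already committed to. The Task and StrongAgent classes, the load figures and the threshold are all invented for the illustration.

# Sketch of the stronger notion: an agent with mentalistic state (beliefs,
# goals) that does not unthinkingly execute instructions. Thresholds and
# attribute names are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    network_load: float          # fraction of available bandwidth it would use
    conflicts_with: set = field(default_factory=set)


@dataclass
class StrongAgent:
    beliefs: dict                # e.g. {"max_network_load": 0.5}
    goals: set                   # names of tasks the agent is committed to

    def handle(self, task: Task) -> str:
        # collaboration: refuse tasks that would overload shared resources
        if task.network_load > self.beliefs.get("max_network_load", 1.0):
            return f"refuse '{task.name}': unacceptable network load"
        # collaboration: question instructions that conflict with current goals
        if task.conflicts_with & self.goals:
            return f"ask user: '{task.name}' conflicts with {task.conflicts_with & self.goals}"
        self.goals.add(task.name)                 # rationality: act towards goals
        return f"accept '{task.name}'"


agent = StrongAgent(beliefs={"max_network_load": 0.5}, goals={"archive mail"})
print(agent.handle(Task("mirror entire site", network_load=0.9)))
print(agent.handle(Task("delete mail archive", 0.1, {"archive mail"})))
print(agent.handle(Task("fetch daily news", 0.2)))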
2.2.3 "Agency" and "Intelligence"

The degree of autonomy and authority vested in an agent is called its agency. It can be measured, at least qualitatively, by the nature of the interaction between the agent and the other entities in the system in which it operates. At a minimum, an agent must run asynchronously. The degree of agency is enhanced if an agent represents a user in some way; this is one of the key values of agents. A more advanced agent can interact with other entities such as data, applications or services. Still further advanced agents collaborate and negotiate with other agents.

What exactly makes an agent "intelligent" is hard to define. It has been the subject of many discussions in the field of Artificial Intelligence, and a clear answer has yet to be found. Yet a workable definition of what makes an agent intelligent is given in [IBM95]:

"Intelligence is the degree of reasoning and learned behaviour: the agent's ability to accept the user's statement of goals and carry out the task delegated to it. At a minimum, there can be some statement of preferences, perhaps in the form of rules, with an inference engine or some other reasoning mechanism to act on these preferences. Higher levels of intelligence include a user model or some other form of understanding and reasoning about what a user wants done, and planning the means to achieve this goal. Further out on the intelligence scale are systems that learn and adapt to their environment, both in terms of the user's objectives, and in terms of the resources available to the agent. Such a system might, like a human assistant, discover new relationships, connections, or concepts independently from the human user, and exploit these in anticipating and satisfying user needs."
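The low end of this intelligence scale - preferences stated as rules, with some reasoning mechanism to act on them - can be pictured very simply. In the sketch below, user preferences are written as condition/action rules and a trivial mechanism applies the first rule that fires to each incoming item. The rule format and the mail-sorting example are assumptions made for the illustration; they are not taken from [IBM95].

# Minimal sketch of the low end of the intelligence scale described above:
# user preferences stated as rules, plus a trivial reasoning mechanism that
# applies them. The rule format and example items are illustrative only.

# Each rule: (condition on an item, action to take when the condition holds).
preference_rules = [
    (lambda item: item["sender"] == "boss", "flag as urgent"),
    (lambda item: "newsletter" in item["subject"].lower(), "file in 'reading'"),
    (lambda item: True, "leave in inbox"),        # default preference
]


def act_on(item, rules):
    """Trivial inference: return the action of the first rule that fires."""
    for condition, action in rules:
        if condition(item):
            return action


inbox = [
    {"sender": "boss", "subject": "Budget"},
    {"sender": "acm", "subject": "Weekly newsletter"},
]
for mail in inbox:
    print(mail["subject"], "->", act_on(mail, preference_rules))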
2.3 The User's "definition" of agents

"User knowledge, rather than product capability, is the principal determinant of agent-enabled application usage today. [...] User need is the principal consideration in developing/executing business strategies for agent-enabled products."
from [JANC95]

Just as in the coming information society, the success and development of agents and agent technology are really driven by users, rather than by producers or researchers [23]. So, when considering just what an agent is, and which aspects of it are very important and which are less important, the ever-important user factor should not be overlooked. Users will not start to use agents because of their benevolence, proactivity or adaptivity, but because they like the way agents help and support them in doing all kinds of tasks; soon users will use all sorts of convenient (i.e. "intelligent") applications without realising that they are using agents in doing so.

[23] Users will not play a very active steering role, but user acceptance and adoption will be the ultimate test of agents' success.

As was pointed out at the beginning of this chapter, there is one good reason why a fairly concise definition of an agent that can meet with general approval should be drawn up as soon as possible: clarity towards the user. It should by all means be prevented that "agent" becomes a vague, meaningless and empty term, in the way a term such as "multi-media" has lost its meaning in the course of time. Agents would then be perceived as nothing but the latest marketing hype:

"Just take your old program, and add an agent to the end of your product name. Voila! You have an Object Agent, Test Agent. [...]"
quote taken from [JANC95]

More about (professional) users' views on agents will follow in chapters five and six.

2.4 Summary

Today, agents are a popular research object in many scientific fields. An exact definition and an exact set of characteristics (and their relative weights) are yet to be stated and chosen. Ultimately, users of agents and agent-enabled programs will be the principal determinant of how agents will look, what they will be, and what things they should and should not be able to do.
