Artificial Intelligence - unit 5

 

Unit V

Game Playing:

What is Game?

    The term "Game" means a sort of conflict in which n individuals or groups (known as players) participate.

   Game theory denotes games of strategy.

    John von Neumann is acknowledged as the father of game theory. Neumann defined game theory in 1928 and 1937 and established the mathematical framework for all subsequent theoretical developments.

      Game theory allows decision-makers (players) to cope with other decision-makers (players) who have different purposes in mind. In other words, players determine their own strategies in terms of the strategies and goals of their opponents.

   Games are an integral attribute of human beings. Games engage the intellectual faculties of humans.

   If computers are to mimic people, they should be able to play games.

Overview:

Besides being attractive to people, game playing has a close relation to "intelligence", and games have well-defined states and rules. The most commonly used AI technique in games is "search".

A "Two-person zero-sum game" is most studied game where the two players have exactly opposite goals. Besides there are "Perfect information games"(such as chess and Go) and "Imperfect information games" (such as bridge and games where a dice is used).

Given sufficient time and space, an optimal solution can usually be obtained for the former by exhaustive search, though not for the latter. However, for many interesting games, such a solution is usually too inefficient to be of practical use.

Applications of game theory are wide-ranging. Von Neumann and Morgenstern indicated the utility of game theory by linking it with economic behavior.

     Economic models : For markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, seasonal and cyclical variations, analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services.

      Social sciences : The n-person game theory has interesting uses in studying the distribution of power in legislative procedures, problems of majority rule, and individual and group decision making.

    Epidemiologists : Make use of game theory, with respect to immunization procedures and methods of testing a vaccine or other medication.





 

    Military strategists : Turn to game theory to study conflicts of interest resolved through "battles" where the outcome or payoff of a war game is either victory or defeat.

Solitaire is not considered a game by game theory.

The term 'solitaire' is used for single-player games of concentration.

     An instance of a game begins with a player choosing from a set of alternatives specified by the game rules. This choice is called a move.

    After the first move, the new situation determines which player is to make the next move and what alternatives are available to that player.

   In many board games, the next move is made by the other player.

    In many multi-player card games, the player making the next move depends on who dealt, who took the last trick, who won the last hand, etc.

    The moves made by a player may or may not be known to other players. Games in which all moves of all players are known to everyone are called games of perfect information.

   Most board games are games of perfect information.

   Most card games are not games of perfect information.

   Every instance of the game must end.

   When an instance of a game ends, each player receives a payoff.

A payoff is a value associated with each player's final situation. A zero-sum game is one in which the elements of the payoff matrix sum to zero.

In a typical zero-sum game :

   win = 1 point,

   draw = 0 points, and

   loss = -1 points.

 

 

Overview of Game Theory

Game theory does not prescribe a way to play or say how to play a game. Game theory is a set of ideas and techniques for analyzing conflict situations between two or more parties; the outcomes are determined by their decisions.

General game theorem : In every two-player, zero-sum, non-random, perfect-knowledge game, there exists a perfect strategy guaranteed to at least result in a tie game.

The frequently used terms :

    The term "game" means a sort of conflict in which n individuals or groups (known as players) participate.

   A list of "rules" stipulates the conditions under which the game begins.


 

     A game is said to have "perfect information" if all moves are known to each of the players involved.

    A "strategy" is a list of the optimal choices for each player at every stage of a given

game.

    A "move" is the way in which game progresses from one stage to another, beginning

with an initial state of the game to the final state.

   The total number of moves constitutes the entirety of the game.

   The payoff, or outcome, refers to what happens at the end of a game.

   Minimax - The least good of all good outcomes.

   Maximin - The least bad of all bad outcomes.

The fundamental theorem of game theory is the Minimax Theorem. This theorem says:

"If a minimax of one player corresponds to a maximin of the other player, then that outcome is the best both players can hope for."

   Games can be Deterministic or non-deterministic.

   Games can have perfect information or imperfect information.

·                     Deterministic, perfect information : Chess, Checkers, Go, Othello, Tic-tac-toe

·                     Non-deterministic, perfect information : Backgammon, Monopoly

·                     Deterministic, imperfect information : Navigating a maze

·                     Non-deterministic, imperfect information : Bridge, Poker, Scrabble

Relevance of Game Theory to Game Playing:

The relevance of game theory to mathematics, computer science, and economics is shown in the figure below.

Glossary of terms in the context of Game Theory

ð Game

Denotes games of strategy. It allows decision-makers (players) to cope with other decision-makers (players) who have different purposes in mind. In other words, players determine their own strategies in terms of the strategies and goals of their opponent.

ð Player

Could be one person, two persons or a group of people who share identical interests with respect to the game.

ð Strategy

A player's strategy in a game is a complete plan of action for whatever situation might arise. It is the complete description of how one will behave under every possible circumstance.

 





 

You need to analyze the game mathematically and create a table listing the "outcomes" for each combination of strategies.

A two player strategy table

                        Player A Strategy 1   Player A Strategy 2   Player A Strategy 3   etc.
Player B Strategy 1     Tie                   A wins                B wins                . . .
Player B Strategy 2     B wins                Tie                   A wins                . . .
Player B Strategy 3     A wins                B wins                Tie                   . . .
etc.                    . . .                 . . .                 . . .                 . . .

The MiniMax Search Procedure:

 

A game can be thought of as a tree of possible future game states. For example, in Gomoku the game state is the arrangement of the board, plus information about whose move it is. The current state of the game is the root of the tree (drawn at the top). In general this node has several children, representing all of the possible moves that we could make.

Each of those nodes has children representing the game state after each of the opponent's moves. These nodes have children corresponding to the possible second moves of the current player, and so on.

The leaves of the tree are final states of the game: states where no further move can be made because one player has won, or perhaps the game is a tie. Actually, in general the tree is a graph, because there may be more than one way to get to a particular state. In some games (e.g., checkers) it is even possible to revisit a prior game state.

 

Minimax search

 

Suppose that we assign a value of positive infinity to a leaf state in which we win, negative infinity to states in which the opponent wins, and zero to tie states. We define a function evaluate that can be applied to a leaf state to determine which of these values is correct.

 

If we can traverse the entire game tree, we can figure out whether the game is a win for the current player assuming perfect play: we assign a value to the current game state by recursively walking the tree. At leaf nodes we return the appropriate values. At nodes where we get to move, we take the max of the child values, because we want to pick the best move; at nodes where the opponent moves, we take the min of the child values. This gives us the following pseudo-code procedure for minimax evaluation of a game tree.


 

fun minimax(n: node): int =
    if leaf(n) then return evaluate(n)
    if n is a max node
        v := L
        for each child of n
            v' := minimax(child)
            if v' > v, v := v'
        return v
    if n is a min node
        v := W
        for each child of n
            v' := minimax(child)
            if v' < v, v := v'
        return v
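For concreteness, here is a minimal runnable sketch of the same procedure in Python. It is an illustration, not part of the original notes: the tuple-based node representation and the use of -infinity/+infinity as stand-ins for L and W are assumptions made for the example.

import math

LOSS = -math.inf   # stands for L (a loss for the current player)
WIN = math.inf     # stands for W (a win for the current player)

# A node is either ('leaf', value), ('max', [children]) or ('min', [children]).
def minimax(node):
    kind, payload = node
    if kind == 'leaf':
        return payload                      # evaluate(n) for a final position
    values = [minimax(child) for child in payload]
    return max(values) if kind == 'max' else min(values)

# The example tree from the text: the root is a max node whose value is a loss.
tree = ('max', [
    ('min', [('max', [('leaf', WIN), ('leaf', LOSS)]), ('leaf', LOSS)]),
    ('min', [('leaf', LOSS), ('leaf', WIN), ('leaf', WIN)]),
])
print(minimax(tree))   # prints -inf, i.e. a losing position for the player to move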

 

Consider the following game tree, where the leaves are annotated with W or L to indicate a winning or losing position for the current player (L < W), and interior nodes are labeled with + or - to indicate whether they are "max" nodes where we move or "min" nodes where the opponent moves. In this game tree, the position at the root of the tree is a losing position because the opponent can force the game to proceed to an "L" node:

 

          +
         / \
        /   \
       -     -
      / \   /|\
     +   L L W W
    / \
   W   L

We can see this by doing a minimax evaluation of all the nodes in the tree. Each node is labeled with its minimax value in red:

 

 

 

 

 

 

 





 

          L
         / \
        /   \
       L     L
      / \   /|\
     W   L L W W
    / \
   W   L
 

Static evaluation

 

Usually, expanding the entire game tree is infeasible because there are so many possible states. The solution is to search the tree only to a specified depth. The evaluate function (the static evaluator) is extended so that it returns a value between L and W for game positions that are not final positions. For game positions that look better for the current player, it returns larger numbers. When the depth limit of the search is exceeded, the static evaluator is applied to the node as if it were a leaf:

 

(* the minimax value of n, searched to depth d *)
fun minimax(n: node, d: int): int =
    if leaf(n) or d = 0 then return evaluate(n)
    if n is a max node
        v := L
        for each child of n
            v' := minimax(child, d-1)
            if v' > v, v := v'
        return v
    if n is a min node
        v := W
        for each child of n
            v' := minimax(child, d-1)
            if v' < v, v := v'
        return v

 

For example, consider the following game tree searched to depth 3, where the static evaluator is applied to a number of nodes that are not leaves in the game tree:


 

              6
            /   \
           /     \
          6       L
         / \      |\
        6   8     L W
       /|\  |\      |\
      1 5 6 8 3     1 W

 

The value of the root of the tree is 6, because the current player can force the game to go to a "leaf" node (as defined by the depth cutoff) whose value is at least 6. Notice that by finding out the value of the current position, the player also learns the best move to make: the move that transitions the game to the immediate child with maximum value.

 

Designing the static evaluator is an art: a good static evaluator should be very fast, because it is the limiting factor in how quickly the search algorithm runs. But it also needs to capture a reasonable approximation of how good the current board position is, because it captures what the player is trying to achieve during play. In practice, game AI designers have found that it doesn't pay to build intelligence into the static evaluator when the same information can be obtained by searching a level or two deeper in the game tree.

 

How deeply should the tree be searched? This depends on how much time is available to do the search. Each increase in depth multiplies the total search time by about the number of moves available at each level.

 

Alpha-Beta Pruning

 

The full minimax search explores some parts of the tree it doesn't have to. For example, consider the tree above and suppose that our search is proceeding in left-to-right order.

          6
         / \
        /
       6 (a)
      / \
     6   8 (b)
    /|\  |\
   1 5 6 8 ...

 

Once we have seen the node whose static evaluation is 8, we know that there is no point in exploring any of the rest of the children of the max node above it. Those children could only increase the value of the max node (b) above, but the min node above that (a) is going to have value at most 6 anyway. No matter what happens in the part of the tree under the "...", it can't affect the minimax value of the min node labeled 6. Avoiding searching a part of a tree is called pruning; this is an example of alpha-beta pruning.

 

In general, the minimax value of a node is going to be worth computing only if it lies within a particular range of values. We can capture this by extending the code of the minimax function with a pair of arguments min and max. The new specification of minimax is that it always returns a value in the range [min, max]. For example, when evaluating the node (b) above, we can set max to 6 because there is no reason to find out about values greater than 6. There are corresponding cases where there is no reason to find out about values less than some minimum value. The min and max bounds are used to prune away subtrees by terminating a call to the search early. Once a child node has been seen that pushes the node's value outside the range of interest, there is no point in exploring the rest of the children. This idea is captured by adding the tests "if v > max return max" and "if v < min return min" in the following code:

(* the minimax value of n, searched to depth d.
 * If the value is less than min, returns min.
 * If greater than max, returns max. *)
fun minimax(n: node, d: int, min: int, max: int): int =
    if leaf(n) or d = 0 then return evaluate(n)
    if n is a max node
        v := min
        for each child of n
            v' := minimax(child, d-1, ..., ...)
            if v' > v, v := v'
            if v > max, return max
        return v
    if n is a min node
        v := max
        for each child of n
            v' := minimax(child, d-1, ..., ...)
            if v' < v, v := v'
            if v < min, return min
        return v

 

Because we don't care about values less than min or greater than max, we also initialize the variable v to min in the max case and to max in the min case, rather than to L and W. Notice that if this procedure is invoked as minimax(n, d, L, W), it will behave just like the minimax procedure without min and max bounds, assuming that the static evaluator only returns values between L and W. Thus, a top-level search is invoked in this way so that we get the same answer as before pruning.

 

The only thing missing from our search algorithm now is to compute the right min and max values to pass down. Clearly we could safely pass down the same min and max received in the call, but then we wouldn't have achieved anything. Consider the max node case after we have gone around the loop. In general, the variable v will be greater than min. In the recursive invocation of minimax there is no point in finding out about values less than or equal to v; they can't possibly affect the value of v that is returned. Therefore, instead of passing min down in the recursive call, we pass v itself. Conversely, in the min node case, we pass v in place of max:

 

(* the minimax value of n, searched to depth d.
 * If the value is less than min, returns min.
 * If greater than max, returns max. *)
fun minimax(n: node, d: int, min: int, max: int): int =
    if leaf(n) or d = 0 then return evaluate(n)
    if n is a max node
        v := min
        for each child of n
            v' := minimax(child, d-1, v, max)
            if v' > v, v := v'
            if v > max, return max
        return v
    if n is a min node
        v := max
        for each child of n
            v' := minimax(child, d-1, min, v)
            if v' < v, v := v'
            if v < min, return min
        return v
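As a complement to the pseudo-code, here is a minimal runnable Python sketch of depth-limited minimax with alpha-beta pruning in the same [min, max] style. It is illustrative only: the node representation and the toy static evaluator are assumptions for the example, and the bounds are named lo/hi to avoid clashing with Python's built-in min and max.

import math

LOSS, WIN = -math.inf, math.inf        # stand-ins for L and W

# A position is ('leaf', value) or (kind, static_value, children),
# where kind is 'max' or 'min' and static_value is a heuristic score
# used if the depth limit cuts the search off at this node.
def evaluate(node):
    return node[1]

def minimax(node, d, lo, hi):
    """Value of node searched to depth d, clamped to the range [lo, hi]."""
    if node[0] == 'leaf' or d == 0:
        return evaluate(node)
    kind, _, children = node
    if kind == 'max':
        v = lo
        for child in children:
            v = max(v, minimax(child, d - 1, v, hi))
            if v > hi:                 # the opponent will never allow this branch
                return hi
        return v
    else:                              # min node
        v = hi
        for child in children:
            v = min(v, minimax(child, d - 1, lo, v))
            if v < lo:                 # we would never choose this branch
                return lo
        return v

# The depth-3 example tree from the text (interior heuristic values are
# arbitrary here, since the search reaches the leaves).
tree = ('max', 0, [
    ('min', 0, [('max', 0, [('leaf', 1), ('leaf', 5), ('leaf', 6)]),
                ('max', 0, [('leaf', 8), ('leaf', 3)])]),
    ('min', 0, [('leaf', LOSS),
                ('max', 0, [('leaf', 1), ('leaf', WIN)])]),
])
print(minimax(tree, 3, LOSS, WIN))     # prints 6, and prunes the leaves 3, 1 and W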

 

This is pseudo-code for minimax search with alpha-beta pruning, or simply alpha-beta search. We can verify that it works as intended by checking what it does on the example tree above. Each node is shown with the [min,max] range that minimax is invoked with. Pruned parts of the tree are marked with X.

 

 

 

 

 

 

 





 

                   6[L,W]
                  /      \
                 /        \
            6[L,W]        L[6,W]
            /    \          |   \
           /      \         |    X
       6[L,W]    6[L,6]   L[6,W]
      /  |  \      |  \
     /   |   \     |   X
  1[L,W] 5[1,W] 6[5,W] 8[L,6]

 

In general the [min,max] bounds become tighter and tighter as you proceed down the tree from the root.

 

Making pruning effective:

 

How effective is alpha-beta pruning? It depends on the order in which children are visited. If the children of a node are visited in the worst possible order, it may be that no pruning occurs. For max nodes, we want to visit the best child first so that time is not wasted on the rest of the children exploring worse scenarios. For min nodes, we want to visit the worst child first (from our perspective, not the opponent's). There are two obvious sources of this information:

 

1.                    The static evaluator function can be used to rank the child nodes

2.                    Previous searches of the game tree (for example, from previous moves) performed minimax evaluations of many game positions. If available, these values may be used to rank the nodes.

 

When the optimal child is selected at every opportunity, alpha-beta pruning causes all the rest of the children to be pruned away at every other level of the tree; only that one child is explored. This means that, on average, the tree can be searched twice as deeply as before, a huge increase in searching performance.
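A small back-of-the-envelope calculation illustrates why doubling the search depth matters so much. The branching factor b = 30 below is just an illustrative number, not a figure from the notes.

# With branching factor b, plain minimax to depth d visits about b**d positions,
# while alpha-beta with perfect move ordering visits roughly b**(d // 2):
# the same budget therefore buys roughly twice the depth.
b, d = 30, 6                                     # illustrative values
print(f"minimax    ~ {b**d:,} positions")        # ~ 729,000,000
print(f"alpha-beta ~ {b**(d // 2):,} positions") # ~ 27,000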

 

Expert Systems:

Expert systems (ES) are one of the prominent research domains of AI. They were introduced by researchers in the Computer Science Department at Stanford University.

What are Expert Systems?

Expert systems are computer applications developed to solve complex problems in a particular domain, at a level of extraordinary human intelligence and expertise.


 

Characteristics of Expert Systems

 

·                     High performance

·                     Understandable

·                     Reliable

·                     Highly responsive

Capabilities of Expert Systems

The expert systems are capable of

 

·                     Advising

·                     Instructing and assisting humans in decision making

·                     Demonstrating

·                     Deriving a solution

·                     Diagnosing

·                     Explaining

·                     Interpreting input

·                     Predicting results

·                     Justifying the conclusion

·                     Suggesting alternative options to a problem

They are incapable of

 

·                     Substituting human decision makers

·                     Possessing human capabilities

·                     Producing accurate output when the knowledge base is inadequate

·                     Refining their own knowledge

 

Components of Expert Systems

The components of ES include

 

·                     Knowledge Base

·                     Inference Engine

·                     User Interface

Let us see them one by one briefly

Knowledge Base

It contains domain-specific and high-quality knowledge. Knowledge is required to exhibit intelligence. The success of any ES majorly depends upon the collection of highly accurate and precise knowledge.

 





 

What is Knowledge?

Data is a collection of facts. Information is data organized as facts about the task domain. Data, information, and past experience combined together are termed knowledge.

Components of Knowledge Base

The knowledge base of an ES is a store of both, factual and heuristic knowledge.

·                     Factual Knowledge − It is the information widely accepted by the knowledge engineers and scholars in the task domain.

·                     Heuristic Knowledge − It is about practice, accurate judgement, one's ability of evaluation, and guessing.

Knowledge representation

It is the method used to organize and formalize the knowledge in the knowledge base. It is in the form of IF-THEN-ELSE rules.
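To make the idea of IF-THEN rules concrete, here is a minimal sketch of how such rules can be held as data in a program. It is illustrative only; the rule and fact names are loosely modeled on the fire-alarm example used later in this unit.

# Each rule pairs a set of condition facts with a conclusion fact.
rules = [
    {"if": {"hot", "smoky"}, "then": "fire"},
    {"if": {"alarm_beeps"}, "then": "smoky"},
    {"if": {"fire"}, "then": "switch_on_sprinklers"},
]

facts = {"alarm_beeps", "hot"}

# A rule is applicable when all of its conditions are already known facts.
applicable = [r for r in rules if r["if"] <= facts]
print([r["then"] for r in applicable])   # ['smoky']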

Knowledge Acquisition

The success of any expert system majorly depends on the quality, completeness, and accuracy of the information stored in the knowledge base.

The knowledge base is formed from readings by various experts, scholars, and knowledge engineers. The knowledge engineer is a person with the qualities of empathy, quick learning, and case-analyzing skills.

He acquires information from the subject expert by recording, interviewing, and observing him at work, etc. He then categorizes and organizes the information in a meaningful way, in the form of IF-THEN-ELSE rules, to be used by the inference engine. The knowledge engineer also monitors the development of the ES.

Knowledge acquisition is the process used to define the rules and ontologies required for a knowledge-based system. The phrase was first used in conjunction with expert systems to describe the initial tasks associated with developing an expert system, namely finding and interviewing domain experts and capturing their knowledge via rules, objects, and frame- based ontologies.

Expert systems were one of the first successful applications of artificial intelligence technology to real world business problems. Researchers at Stanford and other AI laboratories worked with doctors and other highly skilled experts to develop systems that could automate complex tasks such as medical diagnosis. Until this point computers had mostly been used to automate highly data intensive tasks but not for complex reasoning. Technologies such as inference engines allowed developers for the first time to tackle more complex problems.

As expert systems scaled up from demonstration prototypes to industrial-strength applications, it was soon realized that the acquisition of domain expert knowledge was one of, if not the, most critical task in the knowledge engineering process. This knowledge acquisition process became an intense area of research on its own. One of the earlier works on the topic used Batesonian theories of learning to guide the process.

One approach to knowledge acquisition investigated was to use natural language parsing and generation to facilitate knowledge acquisition. Natural language parsing could be performed on manuals and other expert documents and an initial first pass at the rules and objects could be developed automatically. Text generation was also extremely useful in generating explanations for system behavior. This greatly facilitated the development and maintenance of expert systems.

A more recent approach to knowledge acquisition is a re-use based approach. Knowledge can be developed in ontologies that conform to standards such as the Web Ontology Language (OWL). In this way knowledge can be standardized and shared across a broad community of knowledge workers. One  example  domain  where  this  approach  has  been  successful is bioinformatics.

Inference Engine

The use of efficient procedures and rules by the Inference Engine is essential for deducing a correct, flawless solution.

In case of knowledge-based ES, the Inference Engine acquires and manipulates the knowledge from the knowledge base to arrive at a particular solution.

In case of rule based ES, it

·                     Applies rules repeatedly to the facts, which are obtained from earlier rule application.

·                     Adds new knowledge into the knowledge base if required.

·                     Resolves rules conflict when multiple rules are applicable to a particular case.

To recommend a solution, the Inference Engine uses the following strategies

 

·                     Forward Chaining

·                     Backward Chaining

Forward Chaining:

It is a strategy of an expert system to answer the question, “What can happen next?”

Here, the Inference Engine follows the chain of conditions and derivations and finally deduces the outcome. It considers all the facts and rules, and sorts them before concluding to a solution.

This strategy is followed for working on conclusion, result, or effect. For example, prediction of share market status as an effect of changes in interest rates.

 

 





 

Backward Chaining

With this strategy, an expert system finds out the answer to the question, “Why did this happen?”

On the basis of what has already happened, the Inference Engine tries to find out which conditions could have happened in the past for this result. This strategy is followed for finding out cause or reason. For example, diagnosis of blood cancer in humans.

User Interface

The user interface provides interaction between the user of the ES and the ES itself. It generally uses Natural Language Processing so that it can be used by a user who is well-versed in the task domain. The user of the ES need not necessarily be an expert in Artificial Intelligence.

It explains how the ES has arrived at a particular recommendation. The explanation may appear in the following forms

 

·                     Natural language displayed on screen.

·                     Verbal narrations in natural language.

·                     Listing of rule numbers displayed on the screen.

The user interface makes it easy to trace the credibility of the deductions.

 

 

Requirements of Efficient ES User Interface

·                     It should help users to accomplish their goals in shortest possible way.

·                     It should be designed to work for user’s existing or desired work practices.

·                     Its technology should be adaptable to user’s requirements; not the other way round.

·                     It should make efficient use of user input.

 

 

Expert Systems Limitations

No technology can offer an easy and complete solution. Large systems are costly and require significant development time and computer resources. ESs have their limitations, which include

 

·                     Limitations of the technology

·                     Difficult knowledge acquisition

·                     ES are difficult to maintain

·                     High development costs


 

 

 

Applications of Expert System

The following shows where an ES can be applied.

·                     Design Domain − Camera lens design, automobile design.

·                     Medical Domain − Diagnosis systems to deduce the cause of disease from observed data, conducting medical operations on humans.

·                     Monitoring Systems − Comparing data continuously with the observed system or with prescribed behavior, such as leakage monitoring in a long petroleum pipeline.

·                     Process Control Systems − Controlling a physical process based on monitoring.

·                     Knowledge Domain − Finding out faults in vehicles, computers.

·                     Finance/Commerce − Detection of possible fraud and suspicious transactions, stock market trading, airline scheduling, cargo scheduling.


Expert System Technology

There are several levels of ES technologies available. Expert system technologies include:

·         Expert System Development Environment The ES development environment includes hardware and tools. They are

ü  Workstations, minicomputers, mainframes.

ü  High-level symbolic programming languages such as LISt Programming (LISP) and PROgrammation en LOGique (PROLOG).

ü  Large databases.

·         Tools − They reduce the effort and cost involved in developing an expert system to a large extent.

ü  Powerful editors and debugging tools with multi-windows.

ü  They provide rapid prototyping

 

 





 

ü  They have inbuilt definitions of model, knowledge representation, and inference design.

·         Shells − A shell is nothing but an expert system without the knowledge base. A shell provides the developers with knowledge acquisition, an inference engine, a user interface, and an explanation facility. For example, a few shells are given below −

ü    Java Expert System Shell (JESS) that provides fully developed Java API for creating an expert system.

ü    Vidwan, a shell developed at the National Centre for Software Technology, Mumbai in 1993. It enables knowledge encoding in the form of IF-THEN rules.

Development of Expert Systems: General Steps

The process of ES development is iterative. Steps in developing the ES include:

Identify Problem Domain

 

·                     The problem must be suitable for an expert system to solve it.

·                     Find the experts in task domain for the ES project.

·                     Establish cost-effectiveness of the system.

Design the System

·                     Identify the ES Technology

·                     Know and establish the degree of integration with the other systems and databases.

·                     Realize how the concepts can represent the domain knowledge best.

Develop the Prototype From Knowledge Base: The knowledge engineer works to

 

·                     Acquire domain knowledge from the expert.

·                     Represent it in the form of IF-THEN-ELSE rules.

Test and Refine the Prototype

·                     The knowledge engineer uses sample cases to test the prototype for any deficiencies in performance.

·                     End users test the prototypes of the ES.

Develop and Complete the ES

·                     Test and ensure the interaction of the ES with all elements of its environment, including end users, databases, and other information systems.

·                     Document the ES project well.

·                     Train the user to use ES.

Maintain the ES

·                     Keep the knowledge base up-to-date by regular review and update.


 

·                     Cater for new interfaces with other information systems, as those systems evolve.

Benefits of Expert Systems

·                     Availability − They are easily available due to the mass production of software.

·                     Less Production Cost − The production cost is reasonable. This makes them affordable.

·                     Speed − They offer great speed. They reduce the amount of work an individual puts in.

·                     Less Error Rate − The error rate is low as compared to human errors.

·                     Reducing Risk − They can work in environments dangerous to humans.

·                     Steady Response − They work steadily without getting emotional, tense, or fatigued.

Explanation (of a resolution proof that Socrates is mortal):

*    The 1st line is the statement "Socrates is a man."

*    The 2nd line is the statement "all humans are mortal", rewritten into the equivalent "for all X, if X is a man then X is mortal".

*    The 3rd line is added to the set to determine the mortality of Socrates.

*    The 4th line is the deduction from lines 2 and 3. It is justified by the inference rule modus tollens, which states that if the conclusion of a rule is known to be false, then so is the hypothesis.

*    Variables X and Y are unified because they have the same value.

*    By unification, lines 5, 4b, and 1 produce a contradiction and identify Socrates as mortal.

*    Note that resolution is an inference rule which looks for a contradiction, and it is facilitated by unification, which determines whether there is a substitution that makes two terms the same.

The logic model formalizes the reasoning process. It is related to relational databases and expert systems.

 

Example 1 : KR forward-backward reasoning

Rule R1 : IF hot AND smoky THEN fire

Rule R2 : IF alarm_beeps THEN smoky

Rule R3 : IF fire THEN switch_on_sprinklers

Fact F1 : alarm_beeps [Given]

Fact F2 : hot [Given]

 

 

  Example 2

Rule R1 : IF hot AND smoky THEN ADD fire

Rule R2 : IF alarm_beeps THEN ADD smoky

Rule R3 : IF fire THEN ADD switch_on_sprinklers

Fact F1 : alarm_beeps [Given]

Fact F2 : hot [Given]

  Example 3 : A typical Forward Chaining (KR forward-backward reasoning)

Rule R1 : IF hot AND smoky THEN ADD fire

Rule R2 : IF alarm_beeps THEN ADD smoky

Rule R3 : IF fire THEN ADD switch_on_sprinklers

Fact F1 : alarm_beeps [Given]

Fact F2 : hot [Given]

Fact F4 : smoky [from F1 by R2]

Fact F5 : fire [from F2, F4 by R1]

Fact F6 : switch_on_sprinklers [from F5 by R3]

 

 

  Example 4 : A typical Backward Chaining

Rule R1 : IF hot AND smoky THEN fire

Rule R2 : IF alarm_beeps THEN smoky

Rule R3 : IF fire THEN switch_on_sprinklers

Fact F1 : hot [Given]

Fact F2 : alarm_beeps [Given]

Goal : Should I switch the sprinklers on?

Forward Chaining

The forward chaining system (KR forward chaining), its properties, algorithms, and conflict resolution strategy are illustrated below.


 

·                     Forward chaining system

ð    facts are held in a working memory

ð    condition-action rules represent actions to be taken when specified facts occur in working memory.

ð    typically, actions involve adding or deleting facts from the working memory.

·                     Properties of Forward Chaining

ð    all rules which can fire do fire.

ð    can be inefficient: this can lead to spurious rules firing and unfocused problem solving.

ð    the set of rules that can fire is known as the conflict set.

ð    the decision about which rule to fire is called conflict resolution.

KR forward chaining

·                     Forward chaining algorithm - I

Repeat

ð    Collect the rule whose condition matches a fact in WM.

ð    Do the actions indicated by the rule (add facts to WM or delete facts from WM).

Until the problem is solved or no condition matches.

Apply this to Example 2, extended (adding 2 more rules and 1 fact):

Rule R1 : IF hot AND smoky THEN ADD fire

Rule R2 : IF alarm_beeps THEN ADD smoky

Rule R3 : IF fire THEN ADD switch_on_sprinklers

Rule R4 : IF dry THEN ADD switch_on_humidifier

Rule R5 : IF sprinklers_on THEN DELETE dry

Fact F1 : alarm_beeps [Given]

Fact F2 : hot [Given]

Fact F3 : dry [Given]

 

Now, two rules can fire (R2 and R4):

Rule R4 fires : ADD switch_on_humidifier, i.e., the humidifier is on [from F3]

Rule R2 fires [followed by a sequence of actions] :

ADD smoky [from F1]

ADD fire [from F2, smoky by R1]

ADD switch_on_sprinklers [by R3]

DELETE dry, i.e., the humidifier goes off [by R5] - a conflict!
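A tiny forward-chaining interpreter illustrates the cycle of matching, conflict resolution, and acting on working memory; which rule fires first depends on the conflict resolution strategy. The sketch below is illustrative only, uses the rule and fact names from the example, and resolves conflicts naively by taking the first applicable rule.

# Rules: (conditions, action, fact). Action is "ADD" or "DELETE".
rules = [
    ({"hot", "smoky"}, "ADD", "fire"),                    # R1
    ({"alarm_beeps"}, "ADD", "smoky"),                    # R2
    ({"fire"}, "ADD", "switch_on_sprinklers"),            # R3
    ({"dry"}, "ADD", "switch_on_humidifier"),             # R4
    ({"switch_on_sprinklers"}, "DELETE", "dry"),          # R5
]

wm = {"alarm_beeps", "hot", "dry"}        # working memory: F1, F2, F3
fired = set()                             # refractoriness: fire each rule only once

while True:
    # Conflict set: applicable rules that have not fired yet.
    conflict_set = [i for i, (cond, _, _) in enumerate(rules)
                    if cond <= wm and i not in fired]
    if not conflict_set:
        break
    i = conflict_set[0]                   # naive conflict resolution: first rule
    cond, action, fact = rules[i]
    fired.add(i)
    if action == "ADD":
        wm.add(fact)
    else:
        wm.discard(fact)
    print(f"R{i+1}: {action} {fact}")

print(sorted(wm))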

 





 

 

 

  Forward chaining algorithm - II (applied to example 2 above )

Repeat

ð    Collect the rules whose conditions match facts in WM.

ð    If more than one rule matches, then use a conflict resolution strategy to eliminate all but one.

ð    Do the actions indicated by the rule (add facts to WM or delete facts from WM).

Until the problem is solved or no condition matches.

 

 

Conflict Resolution Strategy

The conflict set is the set of rules that have their conditions satisfied by working memory elements. Conflict resolution normally selects a single rule to fire. The popular conflict resolution mechanisms are: Refractory, Recency, and Specificity.

Ø    Refractory: A rule should not be allowed to fire more than once on the same data. Executed rules are discarded from the conflict set. This prevents undesired loops.

Ø    Recency: Rank instantiations in terms of the recency of the elements in the premise of the rule. Rules which use more recent data are preferred. Working memory elements are time-tagged, indicating at what cycle each fact was added to working memory.

Ø    Specificity: Rules which have a greater number of conditions, and are therefore more difficult to satisfy, are preferred to more general rules with fewer conditions. More specific rules are 'better' because they take more of the data into account.

Ø    Alternative to Conflict Resolution – Use Meta-Knowledge: Instead of conflict resolution strategies, we sometimes want to use knowledge in deciding which rules to fire. Meta-rules reason about which rules should be considered for firing. They direct reasoning rather than actually performing reasoning.

Meta-knowledge : knowledge about knowledge, used to guide search.

Example of meta-knowledge: IF the conflict set contains any rule (c, a) such that a = "animal is mammal" THEN fire (c, a). This example shows that meta-knowledge encodes knowledge about how to guide the search for a solution. Meta-knowledge is explicitly coded in the form of rules, alongside "object level" knowledge.


 

KR – Backward Chaining

The backward chaining system and its algorithm are illustrated below.

  Backward chaining system

ð    Backward chaining means reasoning from goals back to facts. The idea is to focus the search.

ð    Rules and facts are processed using backward chaining interpreter.

ð    Checks hypothesis, e.g. "should I switch the sprinklers on?"

  Backward chaining algorithm

ð    To prove goal G: if G is in the initial facts, it is proven. Otherwise, find a rule which can be used to conclude G, and try to prove each of that rule's conditions.

Encoding of rules

Rule R1 : IF hot AND smoky THEN fire

Rule R2 : IF alarm_beeps THEN smoky

Rule R3 : IF fire THEN switch_on_sprinklers

Fact F1 : hot [Given]

Fact F2 : alarm_beeps [Given]

Goal : Should I switch the sprinklers on?
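A minimal backward-chaining sketch of the algorithm above, using the same rules and facts. It is illustrative only; representing each rule as a (conditions, conclusion) pair is an assumption made for the example.

rules = [
    ({"hot", "smoky"}, "fire"),              # R1
    ({"alarm_beeps"}, "smoky"),              # R2
    ({"fire"}, "switch_on_sprinklers"),      # R3
]
facts = {"hot", "alarm_beeps"}               # F1, F2

def prove(goal):
    """Prove goal G: either G is a known fact, or some rule concludes G
    and every one of that rule's conditions can itself be proven."""
    if goal in facts:
        return True
    return any(all(prove(c) for c in cond)
               for cond, conclusion in rules if conclusion == goal)

print(prove("switch_on_sprinklers"))   # True: yes, switch the sprinklers on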

 

 

Forward Chaining vs Backward Chaining

Which to use depends on the problem and on the properties of the rule set.

ð   Backward chaining is likely to be better if there is a clear hypothesis. Examples : diagnostic problems or classification problems, medical expert systems.

ð   Forward chaining may be better if there is a less clear hypothesis and we want to see what can be concluded from the current situation. Examples : synthesis systems - design / configuration.

KR – Control Knowledge

An algorithm consists of a logic component, which specifies the knowledge to be used in solving problems, and a control component, which determines the problem-solving strategies by means of which that knowledge is used.

Thus, Algorithm = Logic + Control.

The logic component determines the meaning of the algorithm, whereas the control component only affects its efficiency. An algorithm may be formulated in different ways, producing the same behavior. One formulation may have a clear statement in the logic component but employ a sophisticated problem-solving strategy in the control component.

 

 





 

The other formulation may have a complicated logic component but employ a simple problem-solving strategy. The efficiency of an algorithm can often be improved by improving the control component without changing the logic of the algorithm, and therefore without changing the meaning of the algorithm. The trend in databases is towards the separation of logic and control.

The programming languages today do not distinguish between them. The programmer specifies both logic and control in a single language. The execution mechanism exercises only the most rudimentary problem-solving capabilities.

Computer programs will be more often correct, more easily improved, and more readily adapted to new problems when programming languages separate logic and control, and when execution mechanisms provide more powerful problem-solving facilities of the kind provided by intelligent theorem-proving systems.

 

Perception and Action:

Perception:

The definition of AI is based on the nature of the problems it tackles, namely those for which humans currently outperform computers; it also includes cognitive tasks. Apart from those two aspects, there are many other tasks (that also fall within this realm), such as basic perceptual and motor skills, in which even lower animals possess phenomenal capabilities compared to computers.

Perception involves interpreting sights, sounds, smells, and touch. Action includes the ability to navigate through the world and manipulate objects. If we want to build robots that live in the world, we must understand these processes. Figure 4.3 shows a design for a complete autonomous robot. Since most of AI is concerned only with cognition, one might think we could simply add sensors and effectors to existing AI programs. But the problems in perception and action are substantial in their own right, and they are being tackled by researchers in the field of robotics.

In the past, robotics and AI have been largely independent endeavors, and they have developed different techniques to solve different problems. One key difference between AI programs and robots is that AI programs usually operate in computer-simulated worlds, whereas robots must operate in the physical world. For example, in the case of moves in chess, an AI program can search millions of nodes in a game tree without ever having to sense or touch anything in the real world. A complete chess-playing robot, on the other hand, must be capable of grasping pieces, visually interpreting board positions, and carrying out a host of other actions. The distinction between real and simulated worlds has several implications, as given below:

A design for an Autonomous Robot:

1.   The input to an AI program is symbolic in form (for example, a typed English sentence), whereas the input to a robot is typically an analog signal, such as a two-dimensional video image or a speech waveform.

2.    Robots require special hardware for perceiving and affecting the world, while AI programs require only general-purpose computers.

3.  Robot sensors are inaccurate, and their effectors are limited in precision.

4.   Many robots must react in real time. A robot fighter plane, for example, cannot afford to search optimally or to stop monitoring the world during a LISP garbage collection.

5.     The real world is unpredictable, dynamic, and uncertain. A robot cannot hope to maintain a correct and complete description of the world. This means that a robot must consider the trade-off between devising and executing plans. This trade-off has several aspects. For one thing, a robot may not possess enough information about the world for it to do any useful planning. In that case, it must first engage in an information-gathering activity. Furthermore, once it begins executing a plan, the robot must continually monitor the results of its actions. If the results are unexpected, then re-planning may be necessary.

 

 

6.    Because robots must operate in the real world, searching and backtracking can be costly.

 

 

Recent years have seen efforts to integrate research in robotics and AI. The old idea of simply adding sensors and effectors to existing AI programs has given way to a serious rethinking of basic AI algorithms in light of the problems involved in dealing with the physical world. Research in robotics is likewise affected by AI techniques, since reasoning about goals and plans is essential for mapping perceptions onto appropriate actions.

At this point one might ask whether physical robots are necessary for research purposes. Since current AI programs already operate in simulated worlds, why not build more realistic simulations which better model the real world? Such simulators do exist. There are several advantages to using a simulated world: experiments can be conducted very rapidly, conditions can easily be replicated, programs can return to previous states at no cost, and there are no fragile, expensive mechanical parts to deal with. The major drawback of simulators is figuring out exactly which factors to build in. Experience with real robots continues to expose tough problems that do not arise even in the most sophisticated simulators. The world turns out, not surprisingly, to be an excellent model of itself, and a readily available one.





 

We perceive our environment through many channels: sight, sound, touch, smell, and taste. Many animals possess these same perceptual capabilities, and others are also able to monitor entirely different channels. Robots, too, can process visual and auditory information, and they can also be equipped with more exotic sensors, such as laser rangefinders, speedometers, and radar.

Two extremely important sensory channels for humans are vision and spoken language. It is through these two faculties that we gather almost all of the knowledge that drives our problem-solving behaviors.

 

Vision:            Accurate machine vision opens up a new realm of computer applications. These applications include mobile robot navigation, complex manufacturing tasks, analysis of satellite images, and medical image processing. The question is how we can transform raw camera images into useful information about the world.

A video camera provides a computer with an image represented as a two-dimensional grid of intensity levels. Each grid element, or pixel, may store a single bit of information (that is, black/white) or many bits (perhaps a real-valued intensity measure and color information). A visual image is composed of thousands of pixels. What kinds of things might we want to do with such an image? Here are four operations, in order of increasing complexity:

1.   Signal Processing:- Enhancing the image, either for human consumption or as input to another program.

2.   Measurement Analysis:- For images containing a single object, determining the two- dimensional extent of the object depicted.

3.     Pattern Recognition:- For single-object images, classifying the object into a category drawn from a finite set of possibilities.

4.   Image Understanding:- For images containing many objects, locating the objects in the image, classifying them, and building a three-dimensional model of the scene.
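As a small illustration of the pixel-grid representation described above, the sketch below builds a tiny grayscale image as a 2-D array and applies a simple threshold, a toy instance of the signal-processing and measurement operations. The pixel values are invented for the example.

# A tiny 4x4 grayscale image: each pixel holds an intensity from 0 (black) to 255 (white).
image = [
    [ 12,  15, 200, 210],
    [ 10, 190, 220,  18],
    [  8, 185, 215,  14],
    [  9,  11,  13,  16],
]

# Simple signal processing: threshold the image into a black/white (1-bit) image.
threshold = 128
binary = [[1 if pixel > threshold else 0 for pixel in row] for row in image]

# Simple measurement: the 2-D extent of the bright object, as a count of bright pixels.
extent = sum(sum(row) for row in binary)
print(binary)
print("bright pixels:", extent)   # 6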

There are algorithms that perform the first two operations. The third operation, pattern recognition, varies in its difficulty. It is possible to classify two-dimensional (2-D) objects, such as machine parts coming down a conveyor belt, but classifying 3-D objects is harder because of the large number of possible orientations for each object. Image understanding is the most difficult visual task, and it has been the subject of the most study in AI. While some aspects of image understanding reduce to measurement analysis and pattern recognition, the entire problem remains unsolved because of difficulties that include the following:

1.   An image is two-dimensional, while the world is three-dimensional; some information is necessarily lost when an image is created.

2.  One image may contain several objects, and some objects may partially occlude others.

3.    The value of a single pixel is affected by many different phenomena, including the color of the object, the source of the light, the angle and distance of the camera, the pollution in the air, etc. It is hard to disentangle these effects.


 

As a result, 2-D images are highly ambiguous. Given a single image, we could construct any number of 3-D worlds that would give rise to the image; it is impossible to decide which 3-D solid it portrays. In order to determine the most likely interpretation of a scene, we have to apply several types of knowledge.

Speech Recognition:

Natural language understanding systems usually accept typed input, but for a number of applications this is not acceptable. Spoken language is a more natural form of communication in many human-computer interfaces. Speech recognition systems have been available for some time, but their limitations have prevented their widespread use. Below are five major design issues in speech systems. These issues also provide dimensions along which systems can be compared with one another.

1.     Speaker Dependence versus Speaker Independence : A speaker-independent system can listen to any speaker and translate the sounds into written text. Speaker independence is hard to achieve because of the wide variations in pitch and accent. It is easier to build a speaker-dependent system, which can be trained on the voice of a particular speaker.
 

Robotics:

Robotics is a domain in artificial intelligence that deals with the study of creating intelligent and efficient robots.

What are Robots?

Robots are artificial agents acting in a real-world environment.

What is Robotics?

Robotics is a branch of AI, which is composed of Electrical Engineering, Mechanical Engineering, and Computer Science for designing, construction, and application of robots.

Aspects of Robotics

·                     The robots have mechanical construction, form, or shape designed to accomplish a particular task.

·                     They have electrical components which power and control the machinery.

·                     They contain some level of computer program that determines what, when and how a robot does something.

 

 

 

 

 

 

 

 





 

Difference in Robot System and Other AI Program

Here is the difference between the two

 

AI Programs : They usually operate in computer-simulated worlds.
Robots : They operate in the real physical world.

AI Programs : The input to an AI program is in symbols and rules.
Robots : Inputs to robots are analog signals in the form of speech waveforms or images.

AI Programs : They need general-purpose computers to operate on.
Robots : They need special hardware with sensors and effectors.

Robot Locomotion

Locomotion is the mechanism that makes a robot capable of moving in its environment.

There are various types of locomotions

·                     Legged

·                     Wheeled

·                     Combination of Legged and Wheeled Locomotion

·                     Tracked slip/skid

Legged Locomotion

·                     This type of locomotion consumes more power while demonstrating walk, jump, trot, hop, climb up or down, etc.

·                     It requires a larger number of motors to accomplish a movement. It is suited for rough as well as smooth terrain, where an irregular or too smooth surface would make wheeled locomotion consume more power. It is a little difficult to implement because of stability issues.

·                     Legged locomotion comes in varieties of one, two, four, and six legs. If a robot has multiple legs, then leg coordination is necessary for locomotion.

The total number of possible gaits (periodic sequences of lift and release events for each of the legs) a robot can use depends upon the number of its legs.

If a robot has k legs, then the number of possible events N = (2k-1)! . In case of a two-legged robot (k=2), the number of possible events is N = (2k-1)! = (2*2-1)! = 3! = 6.


 

Hence there are six possible different events

·                     Lifting the Left leg

·                     Releasing the Left leg

·                     Lifting the Right leg

·                     Releasing the Right leg

·                     Lifting both the legs together

·                     Releasing both the legs together

In the case of k = 6 legs, there are N = (2*6-1)! = 11! = 39,916,800 possible events. Hence the complexity of a robot grows very quickly with the number of legs.
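A quick check of the gait formula, using the factorial from the text; the k values chosen are just the examples already given.

from math import factorial

def possible_events(k):
    # N = (2k - 1)! lift/release events, as given in the text
    return factorial(2 * k - 1)

print(possible_events(2))   # 6
print(possible_events(6))   # 39916800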

Wheeled Locomotion

It requires a smaller number of motors to accomplish a movement. It is a little easier to implement, as there are fewer stability issues when there are more wheels. It is power efficient as compared to legged locomotion.

·                     Standard wheel − Rotates around the wheel axle and around the contact point.

·                     Castor wheel − Rotates around the wheel axle and the offset steering joint.

·                     Swedish 45° and Swedish 90° wheels − Omni-wheels; they rotate around the contact point, around the wheel axle, and around the rollers.

·                     Ball or spherical wheel − An omnidirectional wheel; technically difficult to implement.

 

Slip/Skid Locomotion

In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at different speeds in the same or opposite directions. It offers stability because of the large contact area between the tracks and the ground.

Components of a Robot

Robots are constructed with the following

·                     Power Supply − Robots are powered by batteries, solar power, hydraulic, or pneumatic power sources.

·                     Actuators − They convert energy into movement.

 

 





 


 

 

 

 

·                     Electric motors (AC/DC) − They are required for rotational movement.

·                     Pneumatic Air Muscles − They contract almost 40% when air is sucked into them.

·                     Muscle Wires − They contract by 5% when electric current is passed through them.

·                     Piezo Motors and Ultrasonic Motors − Best for industrial robots.

·                     Sensors − They provide knowledge of real-time information about the task environment. Robots are equipped with vision sensors to be able to compute the depth in the environment. A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.

Computer Vision

This is a technology of AI with which robots can see. Computer vision plays a vital role in the domains of safety, security, health, access, and entertainment.

Computer vision automatically extracts, analyzes, and comprehends useful information from a single image or an array of images. This process involves development of algorithms to accomplish automatic visual comprehension.

Hardware of Computer Vision System

This involves:

·                     Power supply

·                     Image acquisition device such as camera

·                     a processor

·                     a software

·                     A display device for monitoring the system

·                     Accessories such as camera stands, cables, and connectors

 

 

Tasks of Computer Vision

·                     OCR − In the domain of computers, Optical Character Recognition is software to convert scanned documents into editable text; it often accompanies a scanner.

·                     Face Detection − Many state-of-the-art cameras come with this feature, which enables them to read a face and take the picture at that perfect expression. It is also used to let a user access software on a correct match.

·                     Object Recognition − Object recognition systems are installed in supermarkets, in cameras, and in high-end cars such as BMW, GM, and Volvo.

·                     Estimating Position − Estimating the position of an object with respect to the camera, as in estimating the position of a tumor in a human body.


 

Application Domains of Computer Vision

·                     Agriculture

·                     Autonomous vehicles

·                     Biometrics

·                     Character recognition

·                     Forensics, security, and surveillance

·                     Industrial quality inspection

·                     Face recognition

·                     Gesture analysis

·                     Geo science

·                     Medical imagery

·                     Pollution monitoring

·                     Process control

·                     Remote sensing

·                     Robotics

·                     Transport

 

 

Applications of Robotics

Robotics has been instrumental in various domains such as:

·                     Industries − Robots are used for handling material, cutting, welding, color coating, drilling, polishing, etc.

·                     Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by Defense Research and Development Organization (DRDO), is in function to destroy life-threatening objects safely.

·                     Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.

·                     Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration are a few examples.

·                     Entertainment − Disney’s engineers have created hundreds of robots for movie making.

===================Unit V Completed====================




               Unit 5

     one mark questions

1.             LISP and PROLOG are  High level symbolic programming languages

2.              The popular conflict resolution mechanisms are

3.             ………………provides interaction between the user of ES and ES itself.

4.             Finding out faults in vehicles and computers is ………….

 

2 Mark questions

 

1.      What is called perception?

2.      What is an expert system?

3.      What is game?

4.      What is robotics?

 

15 mark questions

1.      Describe the characteristics of an Expert system.

2.      List out the components of an Expert system.

3.      What are the requirements of efficient ES User Interface?

4.      Describe Perception and Action of Artificial Intelligence.

5.      What are the Tasks of computer vision?

6.      List out the components of a Robot.

 
