Created by Jonathan Zinger over 11 years ago
| Question | Answer |
| What Is AI? | The study of human intelligence via computer models. – The study of man-made computational intelligence. – The study of the architecture of intelligent systems. – The study of “user-friendly” systems. |
| Traditional AI | Intelligence = ability to solve problems |
| Traditional AI | Goal Directed Symbol Manipulation |
| General Problem Solver Examples | Game playing • Theorem proving • Medical diagnosis • Traveling from A to B • Solving puzzles: – Sudoku, logic, crossword, … |
| General Problem Solving Process | Given: – Symbol set, operators – Initial state, goal state • Find: – Sequence of operations that transform the initial state into the goal state |
| General Problem Solving Strategy | Develop knowledge-intensive techniques for efficiently constructing the required problem-solving sequences. |
| Game Playing | Board is current state Legal Moves are symbols Start with Initial state Well understood goal state |
| Game Trees | Image: gametrees (image/png) |
| Game Playing Problem Space | Chess: 10^120 board configs Table Lookup/exhaustive search not feasible |
| Knowledge Directed Search | Static Eval Functions Dynamic Look-Ahead Tree Pruning Heuristics |
| Chess Status | Very Strong Programs Higher Level Concepts Missing Brute Force Works - 6-8 levels look-ahead |
| AI Paradigm | 1. Identify a problem humans solve 2. Analyze activities 3. Build a conceptual model 4. Build a computational model 5. Implement 6. Simulate (execute) 7. Revise |
| Knowledge Representation Issues | Knowledge: 1 Large/unwieldy 2 Difficult to Articulate 3 Inexact/Incorrect 4 Changes |
| Knowledge Representation Key Factors | Must be: 1 Efficient 2 Useful 3 Support reasoning/generalization 4 Facilitate acquisition 5 Understandable by humans |
| KR Approaches | Natural Language Formal Logic Production Rules Semantic Nets Frames, Scripts |
| AI Subdomains | NL Understanding Scene Understanding Multi-Sensor Integration Robotics Expert Systems Machine Learning ... |
| Modern AI Perspective | 1 Agent Oriented 2 Sensors/Effectors - environmental interaction 3 open ended problem solving |
| Agents | Embodied Intelligence Sense-Think-Act Meta Cognition |
| Common Threads | 1 Signal->Symbol Mapping 2 General Purpose vs Specialized Systems 3 Knowledge Representation 4 Knowledge Acquisition 5 Learning |
| Turing Test | Imitation Game Interrogator tries to figure out which is human, A or B, via conversation |
| Weak AI | Study of algorithms which enable computers to do tasks which previously only humans could perform |
| Strong AI | Pursuit of research leading to the development of facsimiles of the human mind |
| Major Areas of AI | Optimization Induction Deduction Interaction |
| AI Area: Optimization | Process of wandering through an environment, hunting for samples which have the highest quality. |
| AI Area: Stochastic Optimization | Metaheuristics - genetic algorithm, simulated annealing, ant colony optimization |
| LISP | Symbol Manipulation Fundamental Data Objects |
| LISP: Symbols | 1 Manipulated as data 2 Fundamental distinction: symbol names vs values: X, 'X, "X" |
| LISP: Symbolic Expressions | Formed from atomic elements and/or s expressions Lists - (a (b c) d) |
| LISP: Underlying Representation | Singly linked list of CONS cells with CAR, CDR fields (a (b c) d) |
| LISP: list construction | cons, list, append (setf L (cons 'a 'b)) (setf L (list 'a 'b)) |
| LISP: list traversal | car, cdr, caar, cadr (print (car L)) |
| LISP - list manipulation | Shared, non-destructive |
| LISP functions | assume CAR specifies a function and CDR its arguments |
| LISP Program | collection of s expressions to be manipulated or evaluated |
| LISP control flow | IF, COND, LOOP, DOLIST, DOTIMES, PROG, MAP |
| LISP REPL | read-evaluate-print loop (loop (print (eval (read)))) |
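For concreteness, here is a minimal Common Lisp sketch of such a loop. The function name `my-repl` and the `:quit` convention are illustrative choices, not part of the card above.

```lisp
;; Minimal read-eval-print loop sketch.
;; Typing :quit at the prompt ends the loop (a convention chosen here).
(defun my-repl ()
  (loop
    (format t "~&> ")
    (let ((form (read)))
      (when (eq form :quit)
        (return))
      (print (eval form)))))
```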
| AI Goal | Develop Knowledge Intensive techniques for efficiently constructing the required problem-solving sequences "Heuristics" |
| Cognitive Approach | Build a computational model of human problem solving |
| CS/Engineering Approach | Build a computationally efficient problem solver (not necessarily modeling human behavior/preference) |
| State Space Search Initial State: | 1. Root node (of tree/graph) 2. Intermediate nodes (partial solutions) 3. Leaf nodes (solutions) |
| State Space Search Goal State | stopping condition. eg Tour of minimal cost in TSP |
| State Space Search Generator | Depth First Breadth First |
| State Space Search TSP | Initial State: 1 visited city N-1 unvisited Goal State: Tour of minimal cost Generator: Depth first by city |
| State Space Search TSP Problems | Size of space: (N-1)! leaf nodes, >(N-1)! interior nodes Measuring efficiency: nodes visited Heuristics: 1 Closest neighbor next (greedy) 2 No overlapping paths |
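A minimal sketch of the "closest neighbor next" heuristic in Common Lisp, assuming distances are supplied as a square 2D array; the function name and 0-based city numbering are illustrative.

```lisp
;; Nearest-neighbor ("closest neighbor next") tour construction.
;; DIST is assumed to be a square 2D array of pairwise distances;
;; cities are numbered 0..n-1 and the tour starts at city 0.
(defun nearest-neighbor-tour (dist)
  (let* ((n (array-dimension dist 0))
         (unvisited (loop for i from 1 below n collect i))
         (tour (list 0)))
    (loop while unvisited do
      (let* ((current (first tour))
             ;; pick the unvisited city closest to the current one
             (next (first (sort (copy-list unvisited) #'<
                                :key (lambda (c) (aref dist current c))))))
        (push next tour)
        (setf unvisited (remove next unvisited))))
    (nreverse tour)))
```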
| State Space Search Boolean Sat | Initial State: N boolean variables unassigned Goal State: Entire expression evals to true Generator: Depth first by variable |
| General Search Strategies | Data Driven vs Goal Directed Uninformed vs Informed Admissible vs Inadmissible Heuristics |
| Uninformed Search Examples | Depth First Breadth First Depth Limited Iterative Deepening Breadth limited bidirectional |
| Search: Informed Examples | w/ Static eval function Hill Climbing Greedy Prioritized depth first Prioritized breadth first Best first |
| Search: Informed Examples | w/ a future cost estimator A* |
| A* | Total est. cost = current cost + heuristic estimate Keep "best so far" cost; prune if est. cost > best-so-far cost Admissible if the heuristic never overestimates future costs |
| Search: Informed Examples | Constraint Violation Assumes constraints Admissible Efficiency Constraint issues |
| Search: Informed Look Ahead | Simulate Future Possibilities extract useful information make choices, via pruning |
| Special Search: Adversarial Games | w/ Lookahead capability goal: 1. Sim Future Poss 2. Extract Useful Info 3. Make choices, prune: Minimax Algs Alpha/Beta pruning |
| Game Tree | Image: tree (image/png) |
| Minimax | Assume symmetric static eval function Max values good for me, min values bad for me Opponent always chooses the optimal min Exhaustive search to a fixed depth |
| State Space Approach | Generate & test with pruning |
| Solution State Search | Move around in solution space rather than constructing solutions incrementally |
| Solution Space search TSP | Solution Space: All permutations Move from one perm to another via 2 city swap Segment inversion |
| Solution Space Search Heuristics | Hill Climbing/descent Greedy Ascent/Descent Gradient ascent/descent |
| Special Search Natural Computation | Simulated Annealing Evolutionary Computation Ant Colony Optimization |
| Special Search: Natural Computation: Simulated Annealing | Energy levels <==> random jumps Initial temperature high, cool slowly |
| Special Search Natural Computation Evolutionary Computation | Simulate Darwinian Evolution Survival of the fittest Parental fitness -> offspring Offspring inheritance |
| Special Search Natural Computation: Ant Colony Optimization | Distributed problem solving Initially parallel random search Pheromone trails |
| Data Driven vs Goal Directed | Forward chaining vs backward chaining |
| General Search Uninformed vs Informed | Blind vs knowledge guided |
| General Search Admissible vs Inadmissible | Provably correct vs generally correct |
| Uninformed Search Breadth First | all nodes at each level expanded before next level |
| Uninformed Search Breadth First - Pros | Complete |
| Uninformed Search Breadth-First Cons | Time & Space All nodes in memory - at 8 levels deep, need 1TB (1k per node, 10^9 nodes) |
| Uninformed Search Uniform-cost search | Breadth-First search which follows lowest cost node instead of all nodes at a level |
| Uninformed Search Depth-first Search | Extends the deepest node on the current fringe of the tree |
| Uninformed Search Depth-first search Pros & Cons | Low memory Reqmts - only one path need be in memory Can get stuck going down long or infinite path if it makes a wrong choice. Not complete or optimal. |
| Uninformed Search Depth-Limited Search | Set a depth limit l; nodes at depth l are treated as if they have no successors. Use depth-first search but stop at l. Depth-first search is the special case with l = ∞. |
| Uninformed Search Depth-Limited Search Pros & Cons | Solves the infinite-path problem Incomplete if l < d (the shallowest goal is beyond the depth limit) Non-optimal if we choose l > d Useful when domain knowledge suggests a limit, e.g. the state-space diameter (known max step count) |
| Uninformed Search Iterative Deepening Depth First Search | find the best depth limit by gradually increasing the limit - 0,1,2 etc. Repeating entire search each time. |
| Uninformed Search Iterative Deepening depth-first search | Preferred uninformed search for large search space and unknown solution depth |
| Uninformed Search Iterative Deepening Pros & Cons | Small memory requirements like depth first search, complete and optimal under right conditions like breadth first. |
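A minimal Common Lisp sketch of the iterative deepening search described above, assuming the caller supplies a goal test and a successor function; all names here are illustrative.

```lisp
;; Depth-limited DFS: returns a path (list of states) to a goal, or NIL.
(defun depth-limited-search (state goal-p successors limit)
  (cond ((funcall goal-p state) (list state))
        ((zerop limit) nil)
        (t (loop for child in (funcall successors state)
                 for path = (depth-limited-search child goal-p successors (1- limit))
                 when path return (cons state path)))))

;; Iterative deepening: re-run depth-limited search with limits 0, 1, 2, ...
(defun iterative-deepening-search (start goal-p successors max-depth)
  (loop for limit from 0 to max-depth
        for path = (depth-limited-search start goal-p successors limit)
        when path return path))
```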
| Uninformed Search Bidirectional Search | Two simultaneous searches, one forward from init state and one back from the goal, stopping when they meet. |
| Uninformed Search Bidirectional Search Pros & Cons | Complete and Optimal if both searches are breadth first. Hard to know how to work back from goal in many cases. |
| Uninformed Search Strategies Comparison | BF: Complete, Optimal, Memory issues UniformCost: Complete, Optimal, memory issues DF: Not complete, not optimal, small memory Depth-Limited: Not complete, not optimal, smaller memory Iterative Deepening: Optimal, Complete, good memory use Bidirectional: Complete, Optimal, if BF, hard to use |
| Informed Search | Use Problem Specific Knowledge beyond the problem description |
| Heuristic function | Estimated cost of the cheapest path from node n to a goal node |
| Greedy Best First | Expand node that is closest to the goal, according to the heuristic function |
| Greedy Best First Pros & Cons | Minimizing estimated cost is prone to false starts: it may choose the wrong path just because the first step appears to get closer to the goal. Like depth-first search, it is neither complete nor optimal; a good heuristic mitigates these problems. |
| Informed Search A* | Combines g(n), the cost to reach the node, and h(n), the estimated cost from the node to the goal. Estimates the cheapest solution through n. |
| A* Heuristic | Complete and Optimal only if h(n) is admissible - that is, it never overestimates the cost to reach the goal |
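A minimal A* sketch in Common Lisp over an explicit graph, keeping the frontier as a list sorted by f = g + h; the representation and names are illustrative assumptions, not a particular library's API.

```lisp
;; A* search sketch.
;; NEIGHBORS: function from a node to a list of (neighbor . step-cost) pairs.
;; H: heuristic estimate of remaining cost (admissible if it never overestimates).
(defun a-star (start goal-p neighbors h)
  ;; each frontier entry is (f g node path-so-far), kept sorted by f = g + h
  (let ((frontier (list (list (funcall h start) 0 start (list start))))
        (best-g (make-hash-table :test #'equal)))
    (setf (gethash start best-g) 0)
    (loop while frontier do
      (destructuring-bind (f g node path) (pop frontier)
        (declare (ignore f))
        (when (funcall goal-p node)
          (return (values (reverse path) g)))
        (dolist (edge (funcall neighbors node))
          (let* ((next (car edge))
                 (new-g (+ g (cdr edge))))
            ;; keep only the cheapest known path to each node
            (when (< new-g (gethash next best-g most-positive-fixnum))
              (setf (gethash next best-g) new-g)
              (push (list (+ new-g (funcall h next)) new-g next (cons next path))
                    frontier)))))
      (setf frontier (sort frontier #'< :key #'first)))))
```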
| Informed Search Hill Climbing | Continually moves in the direction of increasing value. Stops when no neighbor has higher value. |
| Hill Climbing Issues | Get stuck because: Local Maxima Ridges Plateaux |
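A minimal hill-climbing sketch, assuming the caller supplies a neighbor generator and a value function; it stops at the first local maximum, which is exactly where the issues above arise.

```lisp
;; Hill climbing: repeatedly move to the best-valued neighbor,
;; stopping when no neighbor improves on the current state.
(defun hill-climb (state neighbors value)
  (loop
    (let ((best (first (sort (funcall neighbors state) #'> :key value))))
      (if (and best (> (funcall value best) (funcall value state)))
          (setf state best)
          (return state)))))   ; local maximum (or plateau edge) reached
```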
| Gradient Descent | Minimize cost by seeking minima |
| Simulated Annealing | Start at high temperature(make a lot of random moves) and then decrease temperature until reaching steady state at solution. |
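A minimal simulated-annealing sketch for minimization; the geometric cooling schedule and the parameter defaults are illustrative assumptions, not prescribed by the card.

```lisp
;; Simulated annealing: accept worse moves with probability exp(-delta/T),
;; and multiply the temperature by COOLING each step until it is nearly zero.
(defun simulated-annealing (state random-neighbor cost
                            &key (temperature 100.0) (cooling 0.99) (min-temp 0.01))
  (loop while (> temperature min-temp) do
    (let* ((candidate (funcall random-neighbor state))
           (delta (- (funcall cost candidate) (funcall cost state))))
      (when (or (< delta 0)
                (< (random 1.0) (exp (- (/ delta temperature)))))
        (setf state candidate)))
    (setf temperature (* temperature cooling)))
  state)
```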
| Solution State Search genetic algorithm | Stochastic hill-climbing search in which a large population of states is maintained. New states are generated by mutation and crossover. |
| Natural Computation Evolutionary Computation | Simulate Darwin Survival of fittest Parent fitness <==> offspring Offspring inheritance/variation(crossover and mutation) |
| Evolutionary Computation Pseudocode | Create initial population of solutions; until stopping condition: select parents based on fitness, produce offspring, select individuals to die; return result |
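A Common Lisp sketch of the pseudocode above for bit-string individuals; tournament selection, one-point crossover, and per-bit mutation are illustrative choices, not the only options.

```lisp
;; FITNESS maps a bit vector to a number to be maximized.
(defun random-individual (length)
  (coerce (loop repeat length collect (random 2)) 'vector))

(defun crossover (a b)                        ; one-point crossover
  (let ((point (random (length a))))
    (concatenate 'vector (subseq a 0 point) (subseq b point))))

(defun mutate (individual rate)               ; flip each bit with probability RATE
  (map 'vector (lambda (bit) (if (< (random 1.0) rate) (- 1 bit) bit)) individual))

(defun tournament-select (population fitness) ; fitter of two random individuals
  (let ((a (elt population (random (length population))))
        (b (elt population (random (length population)))))
    (if (> (funcall fitness a) (funcall fitness b)) a b)))

(defun genetic-algorithm (fitness &key (bits 20) (pop-size 50)
                                       (generations 100) (mutation-rate 0.01))
  (let ((population (loop repeat pop-size collect (random-individual bits))))
    (loop repeat generations do
      (setf population
            (loop repeat pop-size
                  collect (mutate (crossover (tournament-select population fitness)
                                             (tournament-select population fitness))
                                  mutation-rate))))
    ;; return the fittest individual in the final population
    (first (sort population #'> :key fitness))))
```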
| Minimax algorithm | Complete depth-first search of a game tree in order to backup values from each leaf node, labeling parent nodes with the best possible score on that path |
| Alpha Beta Pruning | Prune nodes which result in a choice the player would never make. For MAX, prune away all but the branch with the highest low score (which MIN will optimally choose). Highly order dependent. |
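A minimal sketch of minimax with alpha-beta pruning over an abstract game; the MOVES and EVAL-FN functions are assumed to be supplied by the caller, with larger evaluations favoring MAX.

```lisp
;; Minimax with alpha-beta pruning, searching to a fixed DEPTH.
(defun alpha-beta (state depth maximizing-p moves eval-fn
                   &optional (alpha most-negative-fixnum)
                             (beta most-positive-fixnum))
  (let ((successors (funcall moves state)))
    (if (or (zerop depth) (null successors))
        (funcall eval-fn state)
        (if maximizing-p
            (loop for child in successors
                  do (setf alpha (max alpha (alpha-beta child (1- depth) nil
                                                        moves eval-fn alpha beta)))
                  when (>= alpha beta) return alpha ; prune: MIN will avoid this branch
                  finally (return alpha))
            (loop for child in successors
                  do (setf beta (min beta (alpha-beta child (1- depth) t
                                                      moves eval-fn alpha beta)))
                  when (<= beta alpha) return beta  ; prune: MAX will avoid this branch
                  finally (return beta))))))
```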
| KR Types of Knowledge | Declarative(Facts) Procedural(HowTo) |
| Properties of Knowledge | Temporal Extent Context Ambiguities, Conflicts |
| Knowledge Representation Issues | Ease of Acquisition Ease of use in problem solving Ease of Maintenance Handling Noise, inconsistencies Generalization |
| KR Extremes | Natural Language Formal Logics |
| KR - Practical Systems | Rule-based Network Representations Logic Programming |
| Propositional Logic | Constants: T,F Operators: AND,OR,NOT,=> Symbol Manipulation: DeMorgans Law Truth Tables! |
| Truth Table | Specifies the truth value for each possible assignment of truth values to its components. |
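A small sketch that enumerates a truth table in Common Lisp; passing the formula as a function of an assignment alist is an encoding chosen here for brevity.

```lisp
;; Enumerate every assignment of T/NIL to SYMBOLS and show FORMULA's value.
(defun truth-table (symbols formula)
  (labels ((rows (syms env)
             (if (null syms)
                 (format t "~a => ~a~%" (reverse env) (funcall formula env))
                 (dolist (value '(t nil))
                   (rows (rest syms) (acons (first syms) value env))))))
    (rows symbols nil)))

;; Example: table for A => B, written as (or (not A) B)
;; (truth-table '(a b)
;;   (lambda (env) (or (not (cdr (assoc 'a env))) (cdr (assoc 'b env)))))
```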
| Modus Ponens | A=>B, A / B If A=>B and A are given, then B can be inferred. |
| And-Elimination | From a conjunction, any conjunct can be inferred. A^B/A. if (A^B) is given, A can be inferred. |
| Biconditional Inference Rules | A<=>B/(A=>B) ^ (B=>A) also the reverse |
| deMorgan's Rule | !(A^B) == (!A || !B) !(A||B) == (!A ^ !B) |
| Satisfiability | A sentence is satisfiable if it is true in some model |
| Propositional Logic Validity | A sentence is valid if it is true in ALL models |
| Propositional Logic Logical Equivalence | A and B are logically equivalent if they are true in the same set of models |
| Propositional Logic Monotonicity | Set of entailed sentences can only increase as information is added to the knowledge base IF KB entails A, then KB ^ B entails A too. "The conclusion of rules must follow regardless of what else is in the knowledge base". |
| Propositional Logic Resolution | Applies only to disjunctions of literals. Uses inference rules to prove a series of statements. |
| Propositional Logic Resolution | Takes two clauses and produces a new clause containing all the literals of the two original clauses except the complementary literals. (L1 || L2) ^ (!L2 || L3) / (L1 || L3) |
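A small sketch of a single resolution step, representing clauses as lists of literals with negation written as (not P); the names and representation are illustrative.

```lisp
;; Negate a literal: P <-> (not P).
(defun negate (literal)
  (if (and (consp literal) (eq (first literal) 'not))
      (second literal)
      (list 'not literal)))

;; Resolve two clauses on the first complementary pair found, or return :none.
(defun resolve (clause1 clause2)
  (dolist (lit clause1 :none)
    (when (member (negate lit) clause2 :test #'equal)
      (return (remove-duplicates
               (append (remove lit clause1 :test #'equal)
                       (remove (negate lit) clause2 :test #'equal))
               :test #'equal)))))

;; (resolve '(l1 l2) '((not l2) l3))  =>  (L1 L3)
```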
| Conjunctive Normal Form | every sentence of propositional logic is logically equivalent to a conjunction of disjunctions of literals. |
| CNF Sample Conversion | B<=>(P12 | P21) 1. Eliminate <=>: (B=>(P12 | P21)) ^ ((P12 | P21)=>B) 2. Eliminate =>, replacing a=>b with !a | b: (!B | P12 | P21) ^ (!(P12 | P21) | B) 3. Move NOT inwards so it applies only to literals: (!B | P12 | P21) ^ ((!P12 ^ !P21) | B) 4. Distribute OR over AND: (!B | P12 | P21) ^ (!P12 | B) ^ (!P21 | B) |
| Forward Chaining Deductive Closure | Facts: A, A=>B^C, B=>R, C=>Q Query: Q? Yes - chaining forward from A derives B and C, then R and Q. |
| Backward Chaining | Facts: A, A=>B&C, B=>R, C=>Q Query: Q? Q holds if C holds (C=>Q); C holds if A holds (A=>B&C); A is a given fact, so yes. |
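A minimal forward-chaining sketch over propositional rules, using the facts from the two cards above; the ((premises ...) conclusion ...) rule encoding is an assumption made here for illustration.

```lisp
;; A rule is ((premise ...) conclusion ...): when every premise is a known
;; fact, all of its conclusions are added. Repeat until no new facts appear.
(defun forward-chain (facts rules)
  (loop
    (let ((new-facts
            (loop for (premises . conclusions) in rules
                  when (and (every (lambda (p) (member p facts)) premises)
                            (notevery (lambda (c) (member c facts)) conclusions))
                  append conclusions)))
      (if new-facts
          (setf facts (union facts new-facts))
          (return facts)))))

;; Facts A, A=>B^C, B=>R, C=>Q:
;; (forward-chain '(a) '(((a) b c) ((b) r) ((c) q)))  =>  includes Q
```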
| KR Using Prop Logic Limitations | Expressiveness - No Variables |
| first order logic | Propositional Calculus plus ForAll and ThereExists Variables, relations, functions. |
| FOL Unification | Find a substitution that makes two expressions identical, letting a variable stand for another term. Is Q(Mary) true? If Q(Z) is known and we unify Z with Mary, then Q(Mary) is true. |
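A compact unifier sketch in the classic textbook style (no occurs check); treating symbols that start with "?" as variables is a convention assumed here, not taken from the card.

```lisp
;; Return a binding alist that makes X and Y identical, or :fail.
(defun variable-p (x)
  (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

(defun unify (x y &optional bindings)
  (cond ((eq bindings :fail) :fail)
        ((equal x y) bindings)
        ((variable-p x) (unify-variable x y bindings))
        ((variable-p y) (unify-variable y x bindings))
        ((and (consp x) (consp y))
         (unify (rest x) (rest y) (unify (first x) (first y) bindings)))
        (t :fail)))

(defun unify-variable (var term bindings)
  (let ((binding (assoc var bindings)))
    (if binding
        (unify (cdr binding) term bindings)   ; already bound: check consistency
        (cons (cons var term) bindings))))    ; bind the variable to the term

;; (unify '(q ?z) '(q mary))  =>  ((?Z . MARY))
```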
| KR using Pred Logic FOL Example Simplification | Facts: FAm A(m), FAx A(x)=>B(x)^C(x), FAy B(y)=>R(y), FAz C(z)=>Q(z) |
| FOL PROS & Cons | Much more expressive Computation Issues |
| Prolog | Horn clauses: only one literal on the right side of => (no conjunctions in the head) Negation as failure: if we can't prove the query, it is treated as false Closed world assumption |
| FOL Quantifiers | Universal Quantification "For All ..." Existential Quantification "There Exists ..." |