325 Flashcards

1
Q

What are Agents?

A

An agent is a system or entity that perceives its environment and takes actions to achieve specific goals. An agent can be a robot, software, or any other entity that is capable of receiving inputs from its environment through sensors, processing data, and producing an output.

An agent typically operates in a dynamic environment, able to process its environment internally and make decisions or take actions based on its goals or objectives. These actions may involve modifying the environment, interacting with other agents, or performing specific tasks.

Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : 𝒫* → 𝒜, where 𝒫* is the set of percept sequences and 𝒜 is the set of actions.

2
Q

What is a Reflex Agent?

A

Simple reflex agents make decisions based only on the information they receive from their sensors at any given moment. The agent does not consider the history of past percepts or the potential future consequences of its actions.

  • The agent has sensors to perceive its environment. These sensors can be physical devices or inputs from a computer system.
  • The agent follows a set of condition-action rules, known as production rules or IF-THEN rules. Each rule specifies a condition on the percept; if the condition is met, the corresponding action is taken.
  • The agent has actuators to perform actions in the environment based on the conditions met. Actuators can be physical devices such as motors, or software components that interact with the system.

They essentially react only to the current stimulus, without any consideration of what previous stimuli were.

3
Q

What is a Model-Based Reflex Agent?

A

Model-based reflex agents consider the current percept as well as the internal state, which they update based on the history of percepts and actions. They maintain an internal model of the environment to make more informed decisions.

  • Like simple reflex agents, model-based reflex agents make use of sensors to receive inputs from their environment, condition-action rules to determine what action to take, and actuators to perform actions in the environment.
  • Additionally, model-based reflex agents maintain an internal model of the environment. This model is an abstract representation of the world that captures relevant aspects, relationships, and dynamics. This allows the agent to simulate or predict the consequences of its actions on the environment.
4
Q

What is a Goal-Based Agent?

A

Goal-based agents have explicit goals or objectives and take actions based on their current state and the desired goal state. They make decisions by considering the available actions and the expected outcome or utility of those actions.

  • Like simple reflex agents, goal-based agents make use of sensors to receive inputs from their environment, condition-action rules to determine what action to take, and actuators to perform actions in the environment.
  • The agent has explicit goals that define the desired state it aims to achieve.

It possesses knowledge about its environment, available actions, and the potential consequences of its actions. It uses reasoning mechanisms, such as logical or probabilistic reasoning, to evaluate different actions and their likelihood of achieving the desired goals.

5
Q

What are BDI Agents?

A

Belief-Desire-Intention agents model human-like reasoning and decision-making processes. BDI agents aim to capture the cognitive aspects of human behaviour by incorporating beliefs, desires and intentions as fundamental concepts.
- Beliefs represent the agent’s knowledge about the world. These beliefs can include facts about the environment, the agent’s internal state, the states of other agents, and other relevant information. Beliefs are typically represented as a set of propositions or statements.
- Desires reflect the agent’s goals; they represent what the agent wants to achieve or the states of the world it finds desirable. Desires can range from simple goals to complex preferences and can be hierarchical, with desires having subgoals and dependencies.
- Intentions represent the agent’s selected course of action to achieve its goals. Intentions are formed based on the agent’s beliefs and desires. An intention is a commitment to perform a specific action or set of actions and is influenced by the agent’s beliefs about the current state of its environment.

6
Q

What are Utility Based Agents?

A

Utility-based agents assign utilities or values to different states and actions, enabling them to make decisions based on maximising expected utility. They consider not only the goal but also the potential outcomes and their desirability.

  • Like simple reflex agents, utility-based agents make use of sensors to receive inputs from their environment, condition-action rules to determine what action to take, and actuators to perform actions in the environment.
  • The agent has a utility function that quantifies the desirability associated with different states of the world. The utility function maps each state to a numerical value representing its utility.
  • A utility-based agent may incorporate learning mechanisms to improve its decision-making over time. It can learn from the outcomes of its actions and adjust its utility function.
7
Q

What are Learning Agents?

A

Learning agents have the ability to learn from their interactions with the environment. They can acquire knowledge, update their internal models, and improve their decision-making abilities over time.
- The critic component of the agent evaluates the performance of the agent by providing feedback on how well the agent is doing. The feedback is used by the learning element to update its knowledge and improve future decision-making.
- The learning element is responsible for acquiring new knowledge or skills based on the available feedback. It uses a learning algorithm and techniques to analyse the data and update its internal representation or model of the environment.
- The performance element interacts with the environment through the actuators, takes actions, and makes decisions based on the acquired knowledge. The performance element can be guided by the learned knowledge to achieve its goals.
- The agent balances exploration and exploitation, that is, between gathering new information to extend its learned knowledge and using that knowledge to achieve its goals more effectively.
- Learning agents typically learn through reinforcement learning, which involves receiving feedback based on the outcomes of their actions. The agent seeks to maximise cumulative feedback over time by adjusting its behaviour.

8
Q

What is an Expert System?

A

An expert system is a type of artificial intelligence system that aims to emulate the knowledge and reasoning capabilities of human experts in a specific domain. It is designed to provide specialised knowledge in a particular field, allowing it to solve complex problems and make informed decisions.

9
Q

What is a Knowledge Base in the context of an Expert System?

A

The knowledge base is a repository that stores domain-specific knowledge. It contains facts, rules, heuristics, and relationships relevant to the problem domain.

The knowledge base can be represented as a series of IF-THEN rules: if condition A then conclusion B.

10
Q

What is an Inference Engine in the context of an Expert System?

A

The inference engine is the reasoning component of the expert system. It utilises the knowledge stored in the knowledge base to draw conclusions, make inferences, and answer queries or solve problems. It makes use of reasoning mechanisms, such as rule-based reasoning, logical reasoning, or probabilistic reasoning to process the available knowledge.

11
Q

What is a User Interface in the context of an Expert System?

A

The user interface allows users to interact with the expert system: posing questions, providing input, and receiving responses. The interface can be text-based, graphical, or even voice-based, depending on the design and purpose of the system.

12
Q

What are the features of Rules in an Expert System?

A

Modularity: each rule defines a relatively independent piece of knowledge.
Incrementality: new rules can be added relatively independently of other rules.
Modifiable and Transparent: because the rules are explicit, it is possible to see what goes wrong or what needs to be added.

Rules can also represent uncertainty, and chaining of rules models extensive reasoning.

13
Q

What is Forward Chaining in the context of an Expert System?

A

Forward chaining is a bottom-up approach where the inference engine starts with the available facts and applies rules to derive new conclusions. It works by matching the conditions of the rules with the known facts and inferring new facts and conclusions. This process continues until no more rules can be applied or until the goal is reached.

Example:
- Rule 1: IF A and B THEN C
- Rule 2: IF C THEN D
- Facts: A is true, B is true
1. The inference engine matches Rule 1’s conditions (A and B) with the known facts that A and B are true, and infers C as a new fact.
2. The inference engine matches Rule 2’s condition (C) with the new fact that C is true and infers D as a new fact.
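
Since Prolog’s own execution strategy is backward chaining, a forward-chaining driver has to be written explicitly. A minimal sketch of the example above, assuming a hypothetical rule(Conditions, Conclusion) and fact/1 representation and SWI-Prolog built-ins (forall/2, assertz/1):

:- dynamic fact/1.

rule([a, b], c).   % Rule 1: IF A and B THEN C
rule([c], d).      % Rule 2: IF C THEN D

fact(a).           % known facts
fact(b).

% Fire any rule whose conditions all hold and whose conclusion
% is not yet known, assert the new fact, and repeat.
forward :-
    rule(Conditions, Conclusion),
    \+ fact(Conclusion),
    forall(member(C, Conditions), fact(C)),
    assertz(fact(Conclusion)),
    forward.
forward.

After ?- forward. both fact(c) and fact(d) have been derived.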

14
Q

What is Backward Chaining in the context of an Expert System?

A

Backward chaining is a top-down approach where the inference engine starts with a goal and works backwards, finding rules and facts that support the goal. It recursively applies rules by matching the conclusions of the rules with the goal until it reaches a set of known facts.

Example:
- Rule 1: IF A and B THEN C
- Rule 2: IF C THEN D
- Facts: A is true, B is true
- Goal: D
1. The inference engine starts with goal D and searches for rules that have D as a conclusion.
2. It finds Rule 2 that has D as a conclusion, and checks if its conditions (C) can be satisfied.
3. It finds Rule 1 that has C as a conclusion, and checks if its conditions (A and B) can be satisfied.
4. It verifies whether the known facts satisfy the conditions of Rule 1.
5. If the conditions are satisfied, the inference engine concludes that D is true.
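
Backward chaining is exactly how Prolog answers queries, so the example can be rendered directly as a small (hypothetical) Prolog program:

c :- a, b.   % Rule 1: IF A and B THEN C
d :- c.      % Rule 2: IF C THEN D
a.           % fact
b.           % fact

% ?- d.
% Prolog works backwards from the goal: d needs c, c needs a and b,
% both of which are known facts, so the query succeeds.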

15
Q

What are the different problem types?

A
  • Deterministic, Fully Observable → single-state problem
    — Agent knows exactly which state it will be in; solution is a sequence
  • Non-Observable → conformant problem
    — Agent may have no idea where it is; solution (if any) is a sequence
  • Nondeterministic and/or Partially Observable → contingency problem
    — Percepts provide new information about the current state
    — Solution is a contingent plan or policy
    — Often interleaves search and execution
  • Unknown State Space → exploration problem (“online”)
16
Q

What is Graph Traversal?

A

Graph and tree search algorithms are fundamental techniques that problem-solving agents use to explore and navigate problem spaces to find solutions. Both systematically explore the search space by expanding nodes and traversing edges. The nodes represent states or configurations; the edges represent actions or transitions between states.

17
Q

What are 3 graph search algorithms?

A

Graph search algorithms operate on a graph structure. They move between states, maintaining a data structure of visited nodes that allows them to handle repeated states and loops effectively. Common graph search algorithms include:
- A* Search
- Dijkstra’s Algorithm
- Greedy Best-First Search
These algorithms consider the cost to reach a state and, in the case of A* and Greedy Best-First Search, an estimated heuristic value to guide the search towards the goal.

18
Q

What are 3 tree search algorithms?

A

Tree search algorithms operate on tree structures. They explore the search space by expanding child nodes and systematically searching the tree until a goal state is reached.

Common tree search algorithms include:
- Depth-First Search (DFS)
- Breadth-First Search (BFS)
- Uniform-Cost Search (UCS)
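
A minimal breadth-first tree search sketch in Prolog, assuming a hypothetical edge/2 successor relation and goal/1 test:

% The frontier is a queue of paths, each stored in reverse order.
bfs([[Node|Path]|_], [Node|Path]) :-
    goal(Node).
bfs([[Node|Path]|Rest], Solution) :-
    findall([Next, Node|Path], edge(Node, Next), Children),
    append(Rest, Children, Queue),   % enqueue children at the back
    bfs(Queue, Solution).

% ?- bfs([[s]], Path).   % assumed start node s

Swapping the arguments of append/3 so children go to the front of the frontier turns this into depth-first search.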

19
Q

How do we measure problem solving performance?

A
  • Completeness: Is the algorithm guaranteed to find a solution?
  • Optimality: Does the strategy find the optimal solution?
  • Time Complexity: How long does it take to find a solution?
  • Space Complexity: How much memory is needed to perform the search?

Time and space complexity are measured in terms of:
- b – maximum branching factor of the search tree
- d – depth of the least-cost solution
- m – maximum depth of the state space (can be infinite)

20
Q

Measure Breadth-First Search

A

Completeness - Yes (if b is finite)
Optimality - Yes (if cost = 1 per step); not optimal in general
Time Complexity - O(b^(d+1)), exponential in d
Space Complexity - O(b^(d+1)), keeps every node in memory

21
Q

Measure Uniform-Cost Search

A

Completeness - Yes, if step cost ≥ ε
Optimality - Yes – nodes expanded in increasing order of g(n)
Time Complexity - Number of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
Space Complexity - Number of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)

22
Q

Measure Depth-First Search

A

Completeness - No, fails in infinite-depth spaces, spaces with loops; complete in finite spaces
Optimality - No
Time Complexity - O(b^m) – terrible if m is much larger than d, but if solutions are dense, may be much faster than breadth-first
Space Complexity - O(bm), linear space

23
Q

Measure Depth-Limited Search

A

Completeness - No
Optimality - No
Time Complexity - O(b^ℓ), where ℓ is the depth limit
Space Complexity - O(bℓ)

24
Q

Measure Iterative Deepening Search

A

Completeness - Yes
Optimality - Yes, if step cost = 1; can be modified to explore the uniform-cost tree
Time Complexity - O(b^d)
Space Complexity - O(bd)

25
Q

What is Hill Climbing Search?

A

Hill climbing searches for a local maximum and returns the peak it finds as its answer. Guess x ∈ X, where X is the set of all possible values to choose from. Check the neighbouring values on either side of x and move towards the value that returns a greater result. Continue until a peak is reached; it may not be the highest possible peak, but it is a heuristic estimate of the peak value.
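
A minimal hill-climbing sketch in Prolog over the integers, assuming a hypothetical objective value/2 (here a toy parabola peaking at x = 3):

value(X, V) :- V is 9 - (X - 3) * (X - 3).   % assumed toy objective

% At a peak, neither neighbour scores better, so return X.
hill_climb(X, X) :-
    value(X, V),
    L is X - 1, R is X + 1,
    value(L, VL), value(R, VR),
    V >= VL, V >= VR, !.
% Otherwise move to the better-scoring neighbour and continue.
hill_climb(X, Peak) :-
    L is X - 1, R is X + 1,
    value(L, VL), value(R, VR),
    ( VL > VR -> Next = L ; Next = R ),
    hill_climb(Next, Peak).

% ?- hill_climb(0, Peak).   % Peak = 3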

26
Q

What is A* Search and its measure?

A

To calculate the optimal path from the start node to the goal node, use f(n) = g(n) + h(n)
Where:
- f(n) is the estimated total cost of a path from the start node to the goal that passes through node n
- g(n) is the cost of the path from the start node to node n
- h(n) is the heuristic estimate of the cost from node n to the goal

When the cost of the child nodes of the currently expanded node exceeds the cost of a prior unexpanded node, the algorithm backtracks and expands that node instead. This differs from the similar Greedy Best-First Search, which simply expands the cheapest of the expanded node’s children, with no backtracking.

Completeness - Yes, unless there are infinitely many nodes with f ≤ f(G)
Optimality - Yes, cannot expand f_(i+1) until f_i is finished
Time Complexity - Exponential
Space Complexity - Keeps all nodes in memory
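
A small worked example with assumed costs: if the path from the start to node n costs g(n) = 5 and the heuristic estimates h(n) = 3 from n to the goal, then f(n) = 5 + 3 = 8. If another frontier node n′ has f(n′) = 7, A* expands n′ first.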

27
Q

What are the Probability operations?

A

P(A) is the probability of A
- P(−A) or P(¬A) is the negation of A
— P(¬A) = 1 – P(A)
- P(A˄B) is the AND relation of A and B
— Independent Events: P(A˄B) = P(A) ∙ P(B)
— Dependent Events: P(A˄B) = P(A|B) ∙ P(B)
- P(A˅B) is the OR relation of A and B
— Mutually Exclusive Events: P(A˅B) = P(A) + P(B)
— Not Mutually Exclusive Events: P(A˅B)=P(A)+P(B) – P(A˄B)
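
A worked example with assumed events: rolling a fair die, let A = “even” (P(A) = 1/2) and B = “greater than 4” (P(B) = 1/3). These are not mutually exclusive, since 6 satisfies both (P(A˄B) = 1/6), so P(A˅B) = 1/2 + 1/3 – 1/6 = 2/3.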

28
Q

What is Conditional Probability?

A

Conditional probability is a concept that measures the probability of an event occurring given that another event has already occurred. It provides a way to update probabilities based on additional information or conditions.

The conditional probability of event A given event B is denoted as: 𝑃(𝐴|𝐵)
And is defined as: 𝑃(𝐴|𝐵) = 𝑃(𝐴 ⋀ 𝐵)/𝑃(𝐵)

Where P(A ˄ B) represents the probability of both events A and B occurring together, and P(B) represents the probability of event B occurring.
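
A worked example with assumed values: if P(A ⋀ B) = 0.2 and P(B) = 0.5, then P(A|B) = 0.2/0.5 = 0.4; learning that B has occurred raises the probability of A to 0.4.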

29
Q

What is Independence in the context of Probability?

A

Independence refers to the concept that one event does not affect the probability of another event.

Two events A and B are independent iff: 𝑃(𝐴 ∧ 𝐵) = 𝑃(𝐴) ⋅ 𝑃(𝐵)

If the equation holds true, it implies that the occurrence of A has no impact on B, and vice versa.

30
Q

What is Bayes’ Rule?

A

Bayes’ Rule is a principle in probability theory that allows for updating probabilities based on new information. It provides a formula to calculate the conditional probability of an event given prior knowledge.
Bayes’ Rule states: 𝑃(𝐴|𝐵) = (𝑃(𝐵|𝐴) ⋅ 𝑃(𝐴)) / 𝑃(𝐵), provided P(B) ≠ 0

Where:
- P(A | B) is the conditional probability of event A given event B
- P(B | A) is the conditional probability of event B given event A
- P(A) and P(B) are the prior probabilities of events A and B
Bayes’ Rule provides a way to update probabilities as new evidence becomes available. It enables us to revise our beliefs or hypotheses based on observed data or additional information. By incorporating prior knowledge and updating it, we can make more accurate inferences and predictions.
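
A worked example with assumed values: suppose a disease has prior probability P(A) = 0.01, a test detects it with P(B|A) = 0.9, and the overall positive rate is P(B) = 0.05. Then P(A|B) = (0.9 × 0.01) / 0.05 = 0.18, so even after a positive test the probability of disease is only 18%.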

31
Q

What is a Bayesian Network?

A

Bayesian networks are simple graphical notations for conditional independence assertations. In a Bayesian network, variables are represented as nodes, and the relationships between them are depicted as directed edges. The directionality of the edges represents the conditional dependencies between variables.

These are typically represented with tables depicting probabilities for +A and −A (the probabilities of A being true or false), conditioned on the values of the node’s parents.

32
Q

What is Uncertainty?

A

Uncertainty refers to situations where there is incomplete or imperfect information about the state of the world or the outcomes of certain events. It reflects the lack of certainty or knowledge regarding the true probabilities or outcomes associated with different events or actions.

33
Q

How do logical operators affect uncertainty and probabilities?

A

For conditional statements: the resulting probability is the product of the condition’s certainty and the probability of the rule.
For conjunctive rules (AND): the resulting probability is the minimum of the two.
For disjunctive rules (OR): the resulting probability is the maximum of the two.

34
Q

Name the 4 types of games

A

Deterministic vs Chance games
and
Observable vs Partially Observable

Deterministic games have no probabilities to weigh when making decisions.

Observable games allow the game state to be visible to everyone. (Town of Salem/Mafia would be partially observable games)

35
Q

What’s a Zero-Sum Game?

A

In zero-sum games, the total utility or payoff remains constant throughout the game; the gain of one player is exactly balanced by the loss of the other player(s).
1. Constant Sum: The sum of the utility of all players remains constant throughout the game.
2. Competitive Nature: Zero-sum games are typically competitive, with players having conflicting interests; the goal of each player is to maximise their own utility while minimising the opponent’s utility.
3. Symmetry: Zero-sum games exhibit symmetric utility structures; the gains and losses of each player are mirrored.
4. Pure Strategies: Players typically choose from a set of pure strategies, rather than using randomised or mixed strategies

36
Q

What’s a Minimax Algorithm?

A

The minimax algorithm is used in two-player, zero-sum games to determine the optimal strategy for a player. It explores the game tree, considering all possible moves and their outcomes, assigning values to each state.

The algorithm aims to maximise the player’s payoff while minimising the opponent’s payoff.
1. Generate Game Tree: start from the current state of the game and generate a tree of all possible moves and their resulting states.
2. Assign Values to Terminal States: for each leaf node of the tree, assign a value to represent the payoff or utility of that state for the player.
3. Minimise and Maximise: move up the tree from the leaf node to the root, alternating between minimising and maximising. At each level the player maximises their potential payoff, assuming the opponent will make moves to minimise the player’s payoff.
4. Backpropagation: propagate the values up the tree, updating the values of parent nodes based on the values of their child nodes. If it is the player’s turn, select the child node with the maximum value, else select the child node with the minimum value.
5. Make the Best Move: once the values have been propagated to the root node, select the move corresponding to the child node with the highest value. This move represents the optimal strategy for the player.
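
A minimal minimax sketch in Prolog over an explicit game tree, assuming a hypothetical leaf(Value) / node(Children) representation and SWI-Prolog’s max_list/2 and min_list/2:

% A leaf’s value is its payoff, whoever is to move.
minimax(leaf(V), _, V).
% The maximising player picks the child with the highest minimax value.
minimax(node(Children), max, Best) :-
    findall(V, (member(C, Children), minimax(C, min, V)), Vs),
    max_list(Vs, Best).
% The minimising player picks the child with the lowest minimax value.
minimax(node(Children), min, Best) :-
    findall(V, (member(C, Children), minimax(C, max, V)), Vs),
    min_list(Vs, Best).

% ?- minimax(node([leaf(3), node([leaf(2), leaf(7)])]), max, V).
% V = 3, since the opponent would answer the second move with 2.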

37
Q

What’s Alpha-Beta Pruning?

A

Alpha-beta pruning is an optimisation technique used in conjunction with the minimax algorithm to reduce the number of nodes evaluated during the search. It exploits the observation that, in certain cases, it is unnecessary to explore all possible moves because some moves will never be chosen.
This algorithm works by maintaining two values: alpha and beta.
- Alpha represents the best (maximum) value that the maximising player has found so far
- Beta represents the best (minimum) value that the minimising player has found so far
As the search progresses, if the algorithm finds a move that guarantees a worse outcome than a previously examined move, it prunes the rest of the branch, as the opponent would never choose that move.

38
Q

Describe Planning within the context of Artificial Intelligence

A

Planning is the process of generating a sequence of actions to achieve a specific goal. It involves reasoning about actions, their effects, and the state of the world to create a plan that leads from an initial state to a desired goal state.
Planning can be used in problem-solving agents that operate in dynamic and uncertain environments. It allows agents to autonomously determine a course of action to achieve a desired goal, considering the current state, available actions, and the anticipated consequences of those actions.

The process of planning can be represented in the following steps:
1. Representation of the problem
2. Search space exploration
3. Action selection
4. Plan generation
5. Execution and monitoring

39
Q

What is Classical Planning? (Beethoven’s theme)

A

In classical planning we assume the world is completely observable and that all actions’ effects are deterministic (i.e., completely predictable, no uncertainty).
Any changes to the world occur only as a result of the agent’s actions, and not from the environment changing on its own. We also assume that actions are immediate, with no duration to complete, and with time being reflected in the order of actions.

40
Q

What is Means-End-Planning?

A

Means-end planning is a problem-solving approach that involves breaking down a given problem into subgoals and finding a sequence of actions to achieve those subgoals, leading to the desired goal. It focuses on bridging the gap between the current state and the desired state by identifying and executing a series of actions.
1. Goal Analysis – define the overall goal that needs to be achieved. This goal can be decomposed into smaller subgoals.
2. Current State Analysis – the current state of the system is analysed to determine the differences between the current state and the goal state.
3. Action Selection – the planner selects actions that can potentially bridge the gap between the current state and the goal state.
4. Plan Generation – generate a plan of a sequence of actions that when executed can achieve the subgoals and lead towards the desired goal state.
5. Execution – execute the plan by performing the actions in the specified order. Each action modifies the current state of the problem.
6. Plan Refinement – if any unforeseen obstacles or changes occur during execution, the plan needs to be refined and adjusted accordingly. This can involve re-evaluating the current state, reassessing subgoals, and/or selecting alternative actions.

41
Q

What is STRIPS planning?

A

In STRIPS planning, the world is modelled as a set of states and the agent’s actions can transition it from one state to another.
1. States – represent the current configuration of the world, it is typically represented as a set of propositions that are true in that state.
2. Actions – represent the agent’s ability to change the state of the world. Each action has preconditions (conditions that must be true before the action is executed) and effects (conditions that will be true after the action is executed).
3. Goals – define the desired goal state that the agent wants to achieve; they are typically specified as a set of propositions that should hold in the final state.

STRIPS planning starts from an initial state and incrementally applies actions to achieve the goal state. It uses a search algorithm, such as DFS or BFS, to explore the space of possible actions. STRIPS planning follows a forward-chaining approach: selecting an applicable action whose preconditions are satisfied by the current state, applying the action to produce a new state, and repeating this process until the goal state is achieved.
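
A sketch of how a STRIPS action might be written in Prolog, assuming a hypothetical action(Name, Preconditions, AddList, DeleteList) convention and a blocks-world move action:

% Move block B from F onto T: B and T must be clear, and B must be on F.
action(move(B, F, T),
       [clear(B), clear(T), on(B, F)],   % preconditions
       [on(B, T), clear(F)],             % add list
       [on(B, F), clear(T)]).            % delete list

% Applying an action: check preconditions, delete then add effects.
apply(State, action(_, Pre, Add, Del), NewState) :-
    subset(Pre, State),
    subtract(State, Del, Remaining),
    append(Add, Remaining, NewState).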

42
Q

What are Facts in Prolog?

A

Facts: represent basic statements about the specific domain. They are atomic pieces of information that are considered to be true. Facts are expressed as predicates with a specific arity.
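
For example, some hypothetical family-tree facts:

parent(tom, bob).   % tom is a parent of bob
parent(bob, ann).
female(ann).

Each fact is a predicate with a name and an arity: parent/2, female/1.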

43
Q

What are Rules in Prolog?

A

Rules: are logical statements that define relationships or conditions based on which other statements can be inferred. Rules consist of a head and body, separated by :-. The head represents the inferred statement, while the body contains the conditions.
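
For example, a hypothetical rule built on the facts above:

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

The head grandparent(X, Z) can be inferred whenever both conditions in the body hold.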

44
Q

What are Predicates in Prolog?

A

Predicates: are named procedures or relations that consist of one or more clauses. They represent facts or rules and are defined by their name and arity.
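
For example, a hypothetical ancestor/2 predicate defined by two clauses, a base case and a recursive rule:

ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).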

45
Q

What are Queries in Prolog?

A

Queries: allow you to interact with a Prolog program. A query is a question or goal that is posed to the system, asking it to find solutions or prove certain statements based on defined facts and rules.
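
For example, hypothetical queries against the predicates above:

?- parent(tom, bob).       % true
?- grandparent(tom, Who).  % Who = ann
?- ancestor(tom, ann).     % true

Prolog answers each query by trying to prove the goal from the defined facts and rules, binding any variables along the way.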

46
Q

What is a Cut in Prolog?

A

The cut is a Prolog predicate which offers a way to control backtracking. The cut has no arguments so is written as !/0 (the /0 referring to 0 arguments). The cut is a goal that always succeeds, it commits Prolog to the choices that were made since the parent goal was unified with the left-hand side of the clause containing the cut.

Example: p(X):- b(X), c(X), !, d(X), e(X).
Cut can be used to resolve potential inefficiency. If there are multiple clauses to a predicate, then cut can be used to commit to the first choice if it succeeds, preventing Prolog from considering alternative choices when they are not needed.
- Green Cuts: does not change the meaning of a predicate
- Red Cuts: does change the meaning of a predicate
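
A classic illustration with a hypothetical max_of/3. This is a red cut: removing the ! changes the meaning, because the second clause has no test of its own:

max_of(X, Y, X) :- X >= Y, !.   % commit when X >= Y
max_of(_, Y, Y).

% ?- max_of(3, 2, M).   % M = 3, no backtracking into the second clause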

47
Q

What is a Fail in Prolog?

A

Fail is used to indicate that a goal will always fail. It is used to explicitly state that a certain condition or rule cannot be satisfied and forces the Prolog interpreter to backtrack and search for alternative solutions. It’s simply written as fail/0 (or fail directly in code).
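
A common use is the failure-driven loop, sketched here with a hypothetical item/1 predicate: each solution is printed, then fail forces backtracking to the next one:

item(apple).
item(pear).

show_all :- item(X), write(X), nl, fail.   % fail drives the backtracking
show_all.                                  % succeed once items are exhausted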

48
Q

What is Negation as Failure in Prolog?

A

Negation as failure is a concept that states that the absence of a fact or condition is inferred based on the failure to find evidence to support it.

In Prolog, negation as failure is written \+/1: the goal \+ Goal succeeds if Goal cannot be proven. It can be used in combination with rules and queries to express conditions that are true unless proven otherwise.
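
For example, with hypothetical bird and penguin facts:

bird(tweety).
bird(pingu).
penguin(pingu).

flies(X) :- bird(X), \+ penguin(X).   % flies unless provably a penguin

% ?- flies(tweety).   % true
% ?- flies(pingu).    % false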

49
Q

What are Grammars in Natural Language Processing?

A

A grammar is a set of rules that define the structure and syntax of a language. It provides a framework for generating and analysing sentences of the language. A grammar consists of:
- A set of variables (also called non-terminals), one of which is designated as the start variable.
- A set of terminals (from the alphabet)
- A list of productions (also called rules)

50
Q

What are Context Free Grammars?

A

Context-free grammar is a type of grammar consisting of a set of production rules that define how non-terminals can be replaced by sequences of terminals and other non-terminals. The left-hand side of a production rule is a single non-terminal symbol, and the right-hand side specifies the replacement options for that non-terminal.

Example:
s -> np vp
np -> det n
vp -> v np
vp -> v

det -> the
det -> a
n -> man
n -> woman
v -> shoots

The -> symbol is used to define the rules
- The symbols s, np, vp, det, n, v are nonterminal symbols
- These symbols: (the, a, man, woman, shoots) are terminal symbols.

51
Q

What are Definite Clause Grammars?

A

Definite clause grammars are a notation for writing grammars that hides the underlying difference list variables.

Standard list manipulation:
s(C) :- np(A), vp(B), append(A, B, C).
np(C) :- det(A), n(B), append(A, B, C).
vp(C) :- v(A), np(B), append(A, B, C).
vp(C) :- v(C).
det([the]). det([a]).
n([man]). n([woman]). v([shoots]).

DCG:
s --> np, vp.
np --> det, n.
vp --> v, np.
vp --> v.
det --> [the]. det --> [a].
n --> [man]. n --> [woman]. v --> [shoots].
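
In SWI-Prolog a DCG can then be queried with phrase/2:

?- phrase(s, [the, woman, shoots, a, man]).   % true

Each DCG rule is translated into an ordinary predicate with two extra difference-list arguments; this is what the notation hides.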