121 Flashcards

(130 cards)

1
Q

What is predicate logic?

A

encompasses all of propositional logic + expands on its limitations - accounts for parts of propositions, links between parts of propositions + quantifiers

split into -
syntax - notation for concepts + operators used to create formulae
semantics - meaning behind formulae

2
Q

Explain concepts:

A

terms - similar to subjects, expressed through nouns/pronouns, written as the lower case of the 1st letter
- argument of the predicate so
written as P(a)

predicates - properties/relations among terms, expressed through verbs, written as the upper case of the 1st letter
properties = “is property”
relations = “is the relation of”/”are related”

3
Q

What are the types of operators in predicate logic?

A

connectives + quantifiers

4
Q

What is an atomic formula?

A

consists of 1 predicate and 0+ terms (a 0-ary predicate is just a proposition)
functions that take arguments (constants/variables) and return true or false (Boolean)

5
Q

What is a variable? What is a constant?

A

var = represents general terms (individuals of the same type or a class rather than specific individuals), written with lower case letters from end of alphabet

takes values from the universe of discourse - set of all entities that can replace a variable in an atomic formula

constant = a term desc. specific individual entities

6
Q

Atomic formulae can be…

A

closed/ground formulae - predicate's arguments are only constants, so they are propositions with truth values

OR

open/unground formulae - predicate's arguments include at least 1 variable, so they are not propositions as they have no truth value

AND
have arity in the sense of no. of arguments (unary, binary, n-ary)
arity 0 = propositions, arity 1 = properties, arity 2+ = relations

7
Q

What are the 2 quantifiers in predicate logic?

A

universal - every/"for all" (equivalent to a conjunction over all the values) - denoted ∀
negation: ¬∀x P(x) ≡ ∃x ¬P(x) - "not all" = "there exists one that is not"

existential - at least one/"there exists" (equivalent to a disjunction over all the values) - denoted ∃
negation: ¬∃x P(x) ≡ ∀x ¬P(x) - "none" = "for all, not"

8
Q

What are the rules of inference?

A

all of propositional plus…
- universal/existential instantiation
- universal/existential generalisation

9
Q

What is universal + existential instantiation?

A

universal - if it is true for every value of a variable, it is true for any constant substituted for that variable: from ∀x P(x) infer P(c)

existential - if it is true for at least one value, we can introduce a (fresh) constant to name it: from ∃x P(x) infer P(c) for some new c

10
Q

What is universal + existential generalisation?

A

universal - if it is true for an arbitrary (unspecified) c, it is true for every x: from P(c) infer ∀x P(x)

existential - if it is true for some constant, then something exists for which it is true: from P(c) infer ∃x P(x)

11
Q

What is the difference between restricted + unrestricted quantifiers?

A

unrestricted quantifiers - range over all elements in the Universe of Discourse
e.g. "Everything has 2 legs" - ∀x L(x)

restricted quantifiers - range over a subset of the Universe of Discourse
restricted universal uses implication - "Every man has 2 legs" - ∀x (M(x) → L(x))
restricted existential uses conjunction - "There is a man who has 2 legs" - ∃x (M(x) ∧ L(x))

12
Q

What functions + properties are needed for a queue?

A

head + tail pointers
enqueue(item) - if head null makes item head, else follows tmp.next till null then makes item next (a tail pointer avoids this walk)
dequeue - return head and move rest up
isEmpty()
isFull() - if bounded

int data
element next
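A minimal linked-list sketch of this queue (Python used for illustration; the class and method names are my own, not from the cards). Keeping a tail pointer makes enqueue O(1) instead of walking tmp.next to the end:

```python
class Node:
    def __init__(self, data):
        self.data = data   # int data
        self.next = None   # element next

class Queue:
    def __init__(self):
        self.head = None   # head + tail pointers
        self.tail = None

    def is_empty(self):
        return self.head is None

    def enqueue(self, item):
        node = Node(item)
        if self.head is None:      # if head null, item becomes head
            self.head = self.tail = node
        else:                      # else append after the current tail
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        if self.is_empty():
            return None
        node = self.head           # return head and move the rest up
        self.head = node.next
        if self.head is None:
            self.tail = None
        return node.data
```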

13
Q

Pros + Cons of 2D arrays

A

+ random access - very fast

- fixed size (can only grow by copying elements - costly)
- insertion + deletion is costly
14
Q

What properties + functions does a stack need?

A

top pointer
isEmpty()
isFull() if bounded
push(stack, x) - makes x top if stack isEmpty, else makes x top and current top x.prev
pop(stack) - check isEmpty, if not return top + make top stack.top.prev

element prev
int data
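A minimal sketch of this stack as a linked structure (Python for illustration; names are my own). push links the new node's prev to the current top; pop checks isEmpty first:

```python
class Node:
    def __init__(self, data):
        self.data = data   # int data
        self.prev = None   # element prev

class Stack:
    def __init__(self):
        self.top = None    # top pointer

    def is_empty(self):
        return self.top is None

    def push(self, x):
        node = Node(x)
        node.prev = self.top   # current top becomes x.prev
        self.top = node        # x becomes the new top

    def pop(self):
        if self.is_empty():    # check isEmpty, return nothing if so
            return None
        node = self.top
        self.top = node.prev   # top moves to stack.top.prev
        return node.data
```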

15
Q

iteration vs recursion

A

iteration :
+ memory usage is controlled explicitly by the programmer - stack overflow is less likely
+ can execute more quickly - no overhead from stack frame creation/destruction
- naturally recursive functions can be harder to understand in iterative form

recursion :
+ naturally recursive functions are much more concise when recursively expressed
+ languages which support tail recursion can eliminate some of the extra costs to performance + stack overflow

16
Q

What is indexed retrieval?

A

using an identifier (unique object attribute) to store + retrieve each object/record

  • useful as it allows us to perform binary search (objects sorted by key)
  • can make searches up to 26x faster (when ordered alphabetically), but if all objects fall in 1 section then the usual time complexity applies
17
Q

What is a problem with storing objects? And how can we solve this?

A

collisions
solved by indexed retrieval (recomputes index array + shifts to ensure room) + chains (recomputation + dynamic use of pointers)

18
Q

Efficiency of chains:

A

much faster (where objects distributed across many chains)
- same improvement as w/ using arrays

worst case - every object is in the same chain so O(N)

19
Q

Explain hash tables:

A

data struct. that converts a key (unique identifier) to an int, using a hash function,
which becomes the index for storing the key-value pair

designed to be fast to work w/
- when few collisions = O(1)

h : U → {0, 1, 2, ..., m-1} - h = hash function, U = universe of keys, m = array size

20
Q

What makes a hash function good?

A
  • produces a wide range of index values (helps minimise collisions)
  • uniform hashing - each key is equally likely to map to any array location
21
Q

What operations are helpful for hashing?

A
  • multiplication/division = helps create a wider range of indexes
  • MOD - can % by array size to ensure index is within array bounds (helpful after multiplication)
  • logical shifts + OR operations - can replace multiplication/division for better speeds
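A toy hash function combining these operations (a sketch, not from the cards - the constant 31 and the shift trick are my own illustrative choices): a shift plus subtraction replaces a multiplication to widen the index range, then MOD keeps the result within array bounds.

```python
def simple_hash(key: str, m: int) -> int:
    """Hash a string key into an index 0..m-1 for an array of size m."""
    h = 0
    for ch in key:
        # (h << 5) - h == h * 31: logical shift replacing multiplication
        h = (h << 5) - h + ord(ch)
    return h % m   # MOD by array size keeps the index within bounds
```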
22
Q

hash table as an ADT:

A
  • stores <k,v> pairs (where keys are unique) + a handle to the associated value
  • put(t, <k,v>) - stores <k,v> in table t
  • get(t, k) - returns <k,v> if k is in t
  • delete(t, k)
23
Q

Explain rehashing:

A

used to make the table efficient again once chains get too long

create a new hash table at least double the current size
move all the key-value pairs over

though invisible to the ADT user, it can be an expensive operation

24
Q

Define an algorithm:

A

set of computational steps that transform the input → the output
can be represented by code, flow charts, sentences, punch cards + pseudocode

25
What is a greedy algorithm?
one that optimises for the current situation/moment without considering the best option for the future - non-optimal but correct
26
What are the complexities? And what are they important for?
time complexity - no. of steps relative to input size (may also be measured as actual running time, but that is less accurate)
space complexity - amount of memory required to run an algorithm to completion
space + time = bounded resources
these are important when designing algorithms, alongside correctness, robustness + simplicity
27
What is space complexity separated into?
fixed part - memory required to store certain data independent of the size of the problem, e.g. constants
variable part - space needed by variables, so dependent on problem size, e.g. user-inputted variables
28
What does runtime depend on (time complexity)?
input size + input organisation
we look at this theoretically to avoid the influence of operating conditions such as temp. + no. of processor cores
29
What is the experimental approach?
writing a program to implement the algorithm, running it w/ inputs of varying size/composition + plotting the results
- implementing the algorithm may take a long time and be very difficult
- results may not be indicative of runtime on other inputs
- to compare 2 algorithms you must use the same software + hardware
better to estimate actual runtime theoretically by analysing algorithms
30
What is operation counting?
count operations (assignment, comparison, incrementing, maths - each counts as 1 op)
allows analysis of the algorithm itself, independent of language (only when you ignore minor details + assume each elementary operation takes a fixed time to execute)
31
List the different time complexities?
constant - time taken independent of input size - T(1)
linear - time taken directly proportional to input size - T(n)
logarithmic - time taken logarithmically proportional to input size - T(log n)
quadratic - time taken proportional to the square of input size - T(n^2)
32
What is a logarithm?
the inverse of "to the power of" - to what power (c) do you have to raise b to get a
e.g. 2^3 = 8, so log2(8) = 3
33
How does a sentinel search improve a linear search?
checks if the searched value is at the end of the array and, if not, places the searched value there
eliminates the need to check if within the bounds of the array, since the value will technically always be "found"
T(N) = 2N + 3 - sentinel worst case
T(N) = 3N + 3 - linear worst case
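A sketch of the sentinel trick (Python; the function name and return convention are my own). Planting the target at the end removes the bounds check from the inner loop:

```python
def sentinel_search(a, target):
    """Linear search with a sentinel planted at the end of the array."""
    n = len(a)
    if n == 0:
        return -1
    last = a[-1]           # remember the real last element
    a[-1] = target         # plant the sentinel
    i = 0
    while a[i] != target:  # no i < n bounds test needed in the loop
        i += 1
    a[-1] = last           # restore the array
    if i < n - 1 or last == target:
        return i           # genuinely found
    return -1              # only the sentinel was hit
```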
34
Time complexity of a binary search:
worst case - T(N) = C1 + C2·log2(N) - logarithmic
best case - T(N) = C1 - constant
35
How is growth of functions relevant to time complexity?
to be considered better, an algorithm has to be scalable, not just have a better T(N) for one input
36
What time complexities are best and worst for function growth?
best = constant, logarithmic, n^(fraction), linear, n log n + quadratic
worst = factorial, exponential, polynomial, cubic + n^2 log n
37
What is the difference between polynomial and exponential growth?
exponential = c^n (constant base, input in the exponent)
polynomial = n^c (anything of higher degree than quadratic or cubic)
38
What is big-O notation?
an upper bound on the growth of a function T(n), where f(n) specifies the bound
39
What does O(f(n)) mean?
O = the big-O notation, f(n) = a function of input size n
to show that T(n) is O(f(n)) means finding constants M, n0 such that T(n) ≤ M·f(n) for all n ≥ n0
40
What tells us the big-O notation w/o all the maths?
pick out the dominant term of T(N)
e.g. T(N) = C1·N + C0 → dominant term = N → linear → O(N)
41
Why if T(n) = O(n^2) does it also equal O(n^3)?
in practice we take the smallest simple function, but technically T(n) is also big-O of anything that grows faster
42
How do you find the TC of nested loops? How do you find the TC of sequential instruc.?
nested = multiply the loops' complexities together
sequential = take the dominant term of the sequence (highest value)
43
What is a non-independent loop? How do you find the time complexity of one?
inner loop relies on the counter of the outer loop, e.g. for (i = 0; i < n; i++){ for (j = 0; j < i; j++){}}
can plug in values (dry run) then sum up the values + evaluate, OR use the general summation rule for for loops
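A dry run of such a non-independent loop can be automated (a sketch; the counter function is my own). Counting how often the inner body executes reproduces the summation 0 + 1 + ... + (n-1) = n(n-1)/2:

```python
def count_ops(n):
    """Count inner-body executions for: for i in 0..n-1: for j in 0..i-1."""
    ops = 0
    for i in range(n):
        for j in range(i):   # inner loop depends on the outer counter i
            ops += 1
    return ops
```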
44
What does the summation of i = 1 to N with the rule i actually equal? and then with the rule 1 instead?
rule i: the sum from i = 1 to N of i = N(N+1)/2
rule 1: the sum from i = 1 to N of 1 = N - adds 1, N times
45
What is the summation rule for for loops?
the summation of i = a to b with rule 1 equals b - a + 1 (increments from a to b inclusive, since the format requires a ≤ b)
46
What is big omega notation?
lower bound - T(n) ≥ M×f(n) for all n ≥ n₀
grows no slower than f(n) - ≥
find the simplest f(n); T(n) is also big Ω of anything that grows slower
47
What is big theta notation?
tight bound - M1×f(n) ≤ T(n) ≤ M2×f(n) for all n ≥ n₀
combo of the other 2 - grows no slower + no faster than f(n) - ==
proved by proving big O and big omega - T(n) ∈ O(n) AND T(n) ∈ Ω(n) → we can say T(n) ∈ Θ(n)
48
How do you prove big omega?
replace anything that isn't the dominant term with a 0, then add up to find M
find n₀ by setting the dropped terms against the n term and dividing to get n alone
e.g. given T(n) = 3n + 4, f(n) = n:
3n + 4 ≥ 3n + 0 when n ≥ 0
M = 3 (from the 3n) and n₀ = 0
therefore T(n) ∈ Ω(n)
49
How do you prove big O?
get n₀ by collecting the non-dominant terms + dividing to get n alone
get M by replacing the non-dominant terms with the dominant term and adding up
e.g. given T(n) = 3n + 4, f(n) = n:
T(n) = 3n + 4 ≤ 3n + 4n, so T(n) ≤ 7n
4 ≤ 4n → divide by 4 → 1 ≤ n
M = 7 and n₀ = 1 → T(n) ≤ 7n for all n ≥ 1
therefore T(n) ∈ O(n)
50
What is a recursive algorithm?
an algorithm which calls itself w/ smaller input values
obtains the result for the current input by applying simple operations to the result returned for the smaller input
51
How do you find the recurrence relation for recursive algorithms?
find the base case w/ no recursion and write T(1) = the TC of that one operation (probably constant, so can just write 1)
then write T(n) = the constant work + the recursive call, e.g. T(n) = T(n-1) + 1
52
How can we evaluate recursive time complexity?
- back substitution
- recursion tree
- master theorem
53
How does back substitution work?
replace the recurrence w/ equivalent expansions - e.g. T(n-1) + 1 = T(n-2) + 2, or T(n/2) + 1 = T(n/4) + 2
then find the pattern of substitution - e.g. T(n-k) + k or T(n/2^k) + k
apply this pattern until reaching T(1) to get the final theta notation
e.g. if k = n-1 then T(n) = T(n-k) + k = T(1) + n - 1 = 1 + n - 1 = n, therefore T(n) = n, thus Θ(n)
54
How does a selection sort work?
find the smallest value in the input + extract it into a new array; repeat till all items are in the new array
in-place = find the smallest value in the input + swap w/ the 1st value of the array; repeat w/ the next index as each item is placed, till the index is at the end of the array
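The in-place variant can be sketched as (Python; names my own). Each pass swaps the smallest remaining value into the next index, giving O(n^2) comparisons in all cases:

```python
def selection_sort(a):
    """In-place selection sort."""
    n = len(a)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):      # find smallest in the unsorted tail
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]   # swap it into position i
    return a
```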
55
What does an in-place sort mean?
sorts the items within the array that was inputted instead of creating a new array, to help save memory
O(1) space complexity!!
56
What is the time complexity of a selection sort? and an insertion sort?
O(n^2) in all cases for both!!
57
How does an insertion sort work?
move elements from input to a new array in list order; once added, check if the element is > items already in the list + move accordingly
in-place = compare values in list order + swap accordingly, e.g. 1st > 2nd? 2nd > 3rd? etc.
58
How does a merge sort work?
recursively splits the array ((start index + end index + 1) / 2) into subarrays till there is just 1 item per array
then merges subarrays together by comparing the heads of each
no in-place version
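A sketch of merge sort as described (Python; names my own): split recursively to single items, then merge halves by repeatedly comparing the heads:

```python
def merge_sort(a):
    """Merge sort: recursively split, then merge by comparing heads."""
    if len(a) <= 1:                 # 1 item per array: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # compare the two heads
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail
```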
59
How does a quick sort work?
pick a pivot (point/value)
divide the array into 3 subarrays: one w/ values smaller than the pivot, one w/ greater + one just w/ the pivot
then recursively quick sort the outer subarrays + concatenate to get 1 sorted array
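A sketch of this pivot/partition scheme (Python; names my own). Note the recursive calls are quick sort itself applied to the smaller and greater partitions:

```python
def quick_sort(a):
    """Quick sort: partition around a pivot, recurse on the outer parts."""
    if len(a) <= 1:
        return a
    pivot = a[0]                                  # pick a pivot value
    smaller = [x for x in a[1:] if x < pivot]     # values below the pivot
    greater = [x for x in a[1:] if x >= pivot]    # values at or above it
    return quick_sort(smaller) + [pivot] + quick_sort(greater)
```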
60
What is the time complexity of a merge sort? and a quick sort?
O(n log n) in all cases - merge
O(n log n) best + avg, O(n^2) worst - quick
61
Define a tree:
non-linear ADT that stores elements hierarchically as nodes connected by edges w/ NO cycles or loops
used for file directories, XML/HTML, compilers + collision detection
62
What are the relationships within a tree?
root = top node (no parent)
child = node w/ a parent
leaf = node w/ no children
descendants/ancestors = relative-of-relative-of
63
What is the height of a node? then height of a tree?
node height = no. of nodes on the longest path from the given node to some leaf
tree height = height of its root / depth of its deepest nodes
64
What is the depth of a node? What is the diameter/width of a tree?
depth = number of nodes from the given node to the root
diameter/width = no. of nodes on the longest path between any 2 leaves (max. distance, where distance = no. of edges on the shortest path between 2 given nodes)
65
How many edges does a tree w/ n nodes have and why?
n-1 edges, as each of the n nodes other than the root has exactly 1 edge going to it
66
What is a subtree?
taking a node + its descendants from a full tree basically part of a tree
67
What is a k-ary tree?
tree that imposes a max no. of children to each node e.g. binary = max 2
68
What is a balanced tree?
for any node, the height of its child subtrees differ by ≤ 1
69
What functions would a tree ADT class have?
void add(Object new, Object parent)
Tree find(Object val) - used in move + remove
Tree remove(Object n)
void move(Object n, Object parent)
70
What happens to the children of moved/removed nodes?
children become the children of that node's parents instead
71
What do tree traversal algorithms do?
used to enumerate all elements in a tree or decide in which way we look through a tree to find elements
72
How does pre-order, post-order + in-order traversal work?
pre-order - dot to left (CLR: current, left, right)
post-order - dot to right (LRC)
in-order - dot below (LCR)
all based on a depth-first search
73
How does breadth-first traversal work?
traverse tree from left to right at each level from the root using a queue (where a level is like root then all of that roots children then all of their children etc.)
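A sketch of level-by-level traversal with a queue (Python; TreeNode and the function name are my own):

```python
from collections import deque

class TreeNode:
    def __init__(self, val, children=None):
        self.val = val
        self.children = children or []

def breadth_first(root):
    """Visit level by level, left to right, using a queue."""
    order, q = [], deque([root])
    while q:
        node = q.popleft()          # dequeue the front node
        order.append(node.val)
        q.extend(node.children)     # enqueue its children left to right
    return order
```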
74
Give some examples of index systems:
DNS lookup compiler packet routing table dictionary
75
What methods does a dictionary ADT inc.?
void insert(key k, value v)
void delete(key k)
value find(key k)
76
What are common implementations of a dictionary ADT?
sorted arrays, search trees + hashing
77
What is the difference between using sequential allocation or sorted arrays (dictionary ADT)?
sequential - insert = O(1); find + checking for duplicates = O(n)
sorted - find = O(log n); insertion + deletion = O(n) given sorting
78
What is a binary search tree?
binary tree where each node has a <k,v> pair + nodes are sorted by their keys
to be sorted, every node's key must be ≥ all keys in its left subtree + strictly < all keys in its right subtree
can be used to implement dictionaries
79
What functions does a BST ADT need?
insert()
delete()
find() - if smaller go left, if bigger go right
same functions as a dictionary ADT
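A minimal sketch of BST insert and find (Python; names my own, delete omitted for brevity). Smaller keys go left, bigger go right:

```python
class BSTNode:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

def insert(node, key, value):
    """Insert a <k,v> pair, keeping the BST ordering on keys."""
    if node is None:
        return BSTNode(key, value)
    if key < node.key:
        node.left = insert(node.left, key, value)    # smaller: go left
    elif key > node.key:
        node.right = insert(node.right, key, value)  # bigger: go right
    else:
        node.value = value                           # same key: replace value
    return node

def find(node, key):
    """Return the value stored under key, or None if absent."""
    while node is not None:
        if key < node.key:
            node = node.left
        elif key > node.key:
            node = node.right
        else:
            return node.value
    return None
```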
80
Why do BSTs need to be balanced?
finding is very efficient on balanced trees but not on degenerate/unbalanced ones
search (and naïve insertion + deletion) = O(h), where unbalanced h may be O(n) but balanced height must be O(log n)
use SBBSTs to ensure this height
81
What are the types of self-balancing search trees?
2-3 trees
AVL trees (SBBST)
red-black trees (build upon 2-3 trees, representing a 3-node as a BT w/ a red link)
B-trees (generalisation of 2-3 trees w/ more keys per node)
82
What are some self-balancing trees that are not height-balanced?
splay trees + treaps - helpful for if you wanted tree to balance for common letter patterns rather than any letter sequence
83
What are the cons of 2-3 trees?
inconvenient to implement:
- diff. node types
- multiple compares per node
- moving back up the tree to split 4-nodes
84
What is a 2-3 tree?
2-node = 1 key, 2 children
3-node = 2 keys, 3 children
85
What is different about a 2-3 tree insert()?
to maintain perfect balance...
- a 2-node is transformed into a 3-node, OR
- a 3-node is split into 3 2-nodes, OR
- a non-root 3-node w/ a 3-/2-node parent transforms into a TEMP 4-node (split later)
86
What is a graph?
set of nodes + edges that capture relationships
edges may be directed (arrows), undirected (no arrows, so bidirectional) + weighted (values)
87
What is a simple graph?
graph w/ no self-loops
88
What is a hypergraph?
nodes + hyperedges to capture k-wise relationships ( k > 2) rather than usual pairwise
89
What is a subgraph?
subset of nodes + edges of a graph
90
What is a spanning subgraph/tree?
spanning subgraph = formed from all nodes of the graph but only some edges
spanning tree = spanning subgraph that is a tree
minimal spanning tree = spanning tree w/ min. total sum of edge weights
91
What is a complementary graph?
same set of nodes + contains all the edges NOT in the other graph
92
Explain graph density:
measure from 0-1 that indicates how heavily connected its nodes are
93
What methods would a graph ADT inc.?
addNode(Node n)
removeNode(Node n)
addEdge(Node n, Node m)
removeEdge(Node n, Node m)
isAdjacent(Node n, Node m)
getNeighbours(Node n)
where the graph itself may be a list/adjacency matrix
94
What are the complexities for graph list implementation vs adjacency matrix implementation?
N = nodes, E = edges
memory: O(N+E) (list) vs O(N^2) (matrix)
list TC: O(1) for addNode + O(N) for the rest
matrix TC: O(N^2) for add/removeNode + O(1) for the rest
95
What is an adjacency matrix?
2D array that can represent graphs
1 indicates an edge between nodes, 0 doesn't
symmetric for undirected graphs, so need to modify 2 cells per edge
only modify 1 cell for directed graphs
96
How does depth-first graph traversal work?
uses a stack: push once visited + pop to backtrack
start from the source, then visit the current node's neighbours till you hit an already-visited node
backtrack + continue
97
How does breadth-first graph traversal work?
uses a queue: enqueue when discovered, dequeue once fully visited
start at the source + visit all nodes at distance 1, then 2, etc.
can form a BFS tree out of the list of traversed edges to find shortest paths
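A sketch of queue-based graph BFS (Python; the adjacency-list dict representation and names are my own):

```python
from collections import deque

def bfs(adj, source):
    """Breadth-first traversal over an adjacency-list dict.

    Returns nodes in visiting order: source, then distance 1, then 2, ...
    """
    visited, order = {source}, [source]
    q = deque([source])
    while q:
        node = q.popleft()            # dequeue once fully visited
        for nb in adj[node]:
            if nb not in visited:     # enqueue each new neighbour
                visited.add(nb)
                order.append(nb)
                q.append(nb)
    return order
```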
98
Define path:
sequence of non-repeating nodes from node x to y
99
What are the 3 shortest path problems?
given a graph G = (V, E) + source node s...
shortest distances (to s) = distance from every node v ∈ V to s
shortest paths (to s) = for each node v ∈ V, the shortest path from v to s
pathfinding (s to target t) = shortest path from s to t
100
How to find shortest path for unweighted graphs?
path that inc. least no. of edges - find using BFS
101
What are the algorithms for solving shortest path problems?
Dijkstra's (extended to A*), Bellman-Ford + Floyd-Warshall
102
How do you find shortest distance using Dijkstra's?
initialise visited[] to false + distToSource[] to 0 for the source and infinity for the rest
visit the closest unvisited node; for each of its neighbours, if the path through the current node is shorter, update it
repeat till all nodes are visited
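A sketch of the distance computation (Python; the adjacency-dict format and names are my own). A priority queue replaces the explicit visited[] scan for picking the closest node, matching the improved version mentioned on a later card:

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source over {node: [(neighbour, weight)]}."""
    dist = {node: float("inf") for node in adj}   # infinity for the rest
    dist[source] = 0                              # 0 for the source
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)   # closest unvisited node
        if d > dist[node]:
            continue                  # stale entry: already improved
        for nb, w in adj[node]:
            if d + w < dist[nb]:      # shorter path found: update it
                dist[nb] = d + w
                heapq.heappush(pq, (dist[nb], nb))
    return dist
```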
103
How do you find shortest path using Dijkstra's?
visited[] = false, distToSource[] = 0/infinity, prevNode[] = null
do the same as shortest distance + record prevNode accordingly
104
How do you compute pathfinding using Dijkstra's?
set up arrays as before
visit the closest unvisited nodes + return path + distance once you find the target
path = prevNode[node], prevNode[prevNode[node]], etc.
105
What is the TC of Dijkstra's algorithm?
worst case = O(n^2)
can improve to O(m + n log n) using better data structs. like priority queues/SBBSTs
106
How does the A* algorithm work?
needs extra info (a heuristic) estimating distances from nodes to the target, so the heuristic can guide the search
e.g. if weights represent actual travel distances, can approximate w/ straight-line distance
107
How does Bellman-Ford algorithm work?
works on edges w/ negative weights (but not negative cycles)
computes (over)estimations of the distances from the source → all other nodes, which it iteratively improves
improves all edges every iteration, rather than picking 1 node each time like Dijkstra's
108
What is the worst case TC of the Bellman-Ford algorithm?
O(n ⋅ m)
109
What is an algorithmic paradigm?
generic framework that underlies a class of algorithms e.g. divide + conquer, greedy + recursion
110
What are some problems solved by greedy algorithms?
travelling salesman, shortest path + vertex 3-colouring
111
How does a greedy solution to the shortest path problem work?
same as Dijkstra's start at source + visit closest unvisited till all are visited
112
How does a greedy solution to the travelling salesman problem work?
repeatedly visit the nearest unvisited node to the current node + once all are visited go back to the origin
not the shortest route, but correct
113
How does a greedy solution to vertex 3-colouring work?
for i = 1 to V, choose for node v[i] the smallest colour that no neighbour has chosen yet (CANNOT solve vertex 2-colouring though)
114
What are the greedy algorithms regarding minimum-weight spanning trees?
Kruskal's + Prim's algorithms
MST = subset of a graph such that all the vertices are connected using the minimum possible number of edges, w/ min. total weight
115
How does Kruskal's algorithm work?
start w/ tree T = ∅, sort edges from lowest to highest weight
for i = 1 to |E|: if edge e[i] doesn't form a cycle when added to T, then add it: T = T ∪ {e[i]}
[iterate through edges in order of increasing weight + add each edge if it doesn't create a cycle]
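A sketch of Kruskal's algorithm (Python; the edge-tuple format is my own, and the cycle check uses a union-find structure, a standard choice the cards don't name):

```python
def kruskal(num_nodes, edges):
    """Minimum spanning tree from edges [(weight, u, v), ...], nodes 0..n-1."""
    parent = list(range(num_nodes))   # union-find: each node its own set

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # edges in order of increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: no cycle
            parent[ru] = rv           # merge the components
            tree.append((w, u, v))
    return tree
```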
116
Kruskal TC:
takes O(|E|log|V|) worst case TC
117
How does Prim's algorithm work?
start w/ V[t] = {v[1]} and E[t] = ∅
while V[t] ≠ V: find the min. weight edge {u, w} going from a node in T to a node outside T (w/o loss of generality, u ∈ T), add {u, w} to E[t], then add w to V[t]
[start at a node + repeatedly add the incident outgoing edge of min. weight]
118
Prim's worst case TC:
O(|E| + |V| log|V|) - best implementation
O(|V|^2) or O(|E| log|V|) - easier implementations
119
Define polynomial + super-polynomial runtime:
polynomial = runtime bounded from above by n^c where c is a constant
super-polynomial = runtime grows faster than any polynomial, often requiring more time than available (even for small inputs)
120
Define tractable + intractable problems:
tractable = problems admitting reasonable (polynomial-time) algorithms
intractable = problems not admitting reasonable-time algorithms
cannot be fixed by hardware improvements: the less efficient the algorithm, the less a faster computer can improve the size of problem that can be solved in fixed time
121
What is the importance of P=NP or P!=NP?
proving equal would revolutionise AI, allow decryption of everything + help logistics and delivery
proving not equal shows some questions have no shortcut, keeps info. obscure + helps us understand more about computation
122
What is a decision problem? What is an optimisation problem?
decision problem = representing a problem so that the answer is yes/no, which means that the problem is decidable (solvable)
represented as: abstract problem X is a binary relation mapping problem instances I to problem solutions S
optimisation problem = requires some value to be minimised/maximised - but easily convertible to a decision problem
123
What are P and NP problems?
P = decision problem X ∈ P if there exists an algorithm that decides any instance of X in polynomial time
NP = set of all decision problems solvable in polynomial time w/ a non-deterministic (brute-force style) algorithm
124
What is a certifier?
algorithm that takes a string instance of the problem + a string certificate that witnesses instance ∈ problem
returns false where this is not true, + true for at least 1 certificate if it is true
has to run in polynomial time, as problems in the NP class should have polynomial-time verifiability
125
What is an NP-complete problem + what is an NP-hard problem?
NP-complete = a decision problem B ∈ NP (it is in NP), and A ≤P B for every A ∈ NP (all problems A in NP reduce to B in polynomial time)
at least as hard as any NP problem
e.g. TSP, SAT, 3-SAT, Hamiltonian cycle, clique, knapsack, 3-colouring

NP-hard = A ≤P B for every A ∈ NP, but need not be in NP or even decidable
at least as hard as any NP-complete problem
126
What are reductions?
method for solving a problem using another - shows that a problem is as hard as another
X[1] is polynomial time reducible to a problem X[2] (X[1] ≤P X[2]) if there is a function f, computable in polynomial time, that takes any input x of X[1] + transforms it to an input f(x) of X[2], so that the solution to X[2] on f(x) = the solution to X[1] on x
127
What does it mean if a problem y is polynomial time reducible to x?
y is no harder than x
if x has an efficient solution, so will y
x ∈ P implies y ∈ P
128
How to show a problem is NP-complete?
satisfy all rules of the definition directly, OR
show the problem is in NP + that a known NP-complete problem is polynomial-time reducible to it
129
What's special about NP-complete problems?
a lot have similar polynomial time solvable problems, e.g. minimum spanning tree + TSP, 2-SAT + 3-SAT, shortest path + longest path, etc. these special cases can be used to help solve them (like with SAT + 2-SAT)
130
What ways are there to deal w/ NP problems?
heuristics, approximations, running a slow algorithm anyway, or solving a diff. problem