Algorithms Flashcards

(101 cards)

1
Q

Euclid's equality

A

gcd(m,n) = gcd(n, m mod n)

2
Q

Euclid's algorithm (pseudocode)

A

Euclid(m, n)
  while n != 0 do
    r ← m mod n
    m ← n
    n ← r
  return m
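A minimal runnable Python version of this pseudocode (a sketch; assumes nonnegative integers):

def euclid(m, n):
    # gcd via Euclid's equality: gcd(m, n) = gcd(n, m mod n)
    while n != 0:
        m, n = n, m % n
    return m

# example: euclid(24, 36) == 12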

3
Q

What is the Sieve of Eratosthenes?
Up to which number do we need to check?
Prove it.

A

The Sieve of Eratosthenes gives a way to list all primes up
to some integer n.

We only need to check (sieve with) the numbers up to √n:
* Assume there is a composite number x ≤ n with a divisor a > √n
* Then b = x/a < √n would also be a number that divides x
* But since we have gone through all numbers up to √n, we must have already eliminated all multiples of (the factors of) b, including x ⇒ contradiction

4
Q

Sieve of Eratosthenes algorithm

A

Eratosthenes(n)
// input: positive integer n > 1
// output: boolean array A in which A[i] is true iff i is prime

for p ← 2 to n do
  A[p] ← true
for p ← 2 to √n do
  if A[p] is true then
    j ← p · p
    while j ≤ n do
      A[j] ← false
      j ← j + p
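As a runnable illustration, the same sieve in Python (marking indices 0 and 1 as non-prime is an added assumption the pseudocode leaves implicit):

def eratosthenes(n):
    # A[i] is True iff i is prime, for 0 <= i <= n
    A = [True] * (n + 1)
    A[0] = A[1] = False
    p = 2
    while p * p <= n:          # only sieve with p <= sqrt(n)
        if A[p]:
            j = p * p          # smaller multiples were removed by smaller primes
            while j <= n:
                A[j] = False
                j += p
        p += 1
    return A

# example: [i for i, is_p in enumerate(eratosthenes(30)) if is_p]
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]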

5
Q

binary search algorithm

and time complexity

A

binary_search(a, x)   // a sorted in ascending order, n = length of a
  first ← 0
  last ← n - 1
  while first ≤ last do
    mid ← ⌊(first + last) / 2⌋
    if x == a[mid] then
      return true
    if x < a[mid] then
      last ← mid - 1
    else
      first ← mid + 1
  return false

O(log n)
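A runnable Python sketch of the same search (assumes a is sorted in ascending order; floor division keeps mid an integer):

def binary_search(a, x):
    first, last = 0, len(a) - 1
    while first <= last:
        mid = (first + last) // 2
        if x == a[mid]:
            return True
        if x < a[mid]:
            last = mid - 1          # continue in the left half
        else:
            first = mid + 1         # continue in the right half
    return False

# example: binary_search([1, 3, 5, 7, 9], 7) -> True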

6
Q

rule to show O(f) (function grows no faster than f(n))

rule for Ω(f) (function grows at least as fast as f(n))

rule for Θ(f) (function grows like f(n))

rule for o(f) (f(n) grows faster)

rule for ω(f) ( g(n) grows faster)

A

g ∈ O(f): g(n) ≤ c · f(n) for some constant c > 0 and all sufficiently large n

g ∈ Ω(f): g(n) ≥ c · f(n) for some constant c > 0 and all sufficiently large n

g ∈ Θ(f): g ∈ O(f) ∧ g ∈ Ω(f)

g ∈ o(f): lim_{n→∞} g(n)/f(n) = 0

g ∈ ω(f): lim_{n→∞} f(n)/g(n) = 0

7
Q

hierarchy of growth

A
1 ≺ log(n) ≺ log^2(n) ≺ log^3(n) ≺ n^(1/3) ≺ √n ≺ n ≺ n · log(n) ≺ n · √n ≺ n^2 ≺ n^3 ≺ 2^n ≺ 3^n ≺ n!
8
Q

What to do with the growth rate of a sum of functions?

What to do with the growth rate of a product of functions?

A

Pick the highest growth rate among the summands.

Multiply the individual growth rates, e.g. 4 · n · n ∈ O(n^2).

9
Q

prove loga(n) = logb(n)/logb(a)

A

Let loga(n) = x,
which means a^x = n.
Then logb(a^x) = logb(n),
so x · logb(a) = logb(n),
hence x = logb(n)/logb(a),
i.e. loga(n) = logb(n)/logb(a).

10
Q

L’Hopital’s Rule

A

lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n), i.e. the limit of the ratio equals the limit of the ratio of the derivatives (applicable when f and g both tend to ∞ or both to 0).

11
Q

Selection sort

A

selection_sort(A)   // A of length n
  for i ← 0 to n-2 do
    min ← i
    for j ← i+1 to n-1 do
      if A[j] < A[min] then
        min ← j
    swap A[i] and A[min]
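A direct Python version of the pseudocode above (a sketch that sorts the list in place):

def selection_sort(A):
    # repeatedly move the minimum of A[i:] to position i; O(n^2) comparisons
    n = len(A)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if A[j] < A[min_idx]:
                min_idx = j
        A[i], A[min_idx] = A[min_idx], A[i]
    return A

# example: selection_sort([4, 1, 3, 2]) -> [1, 2, 3, 4]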

12
Q

bubble sort

complexity

A

bubble_sort(A)   // A of length n
  swapped ← true
  while swapped do
    swapped ← false
    for i ← 0 to n-2 do
      if A[i] > A[i+1] then
        swap A[i] and A[i+1]
        swapped ← true

O(n^2)

13
Q

Merge Sort Idea

A

Divide the array of numbers into two halves recursively until each subarray has only one element.

Merge sorted subarrays into larger sorted subarrays until only one sorted array is left.

14
Q

Merge Sort Algorithm

A

Merge_Sort(A)   // input: array A of n integers
  if n > 1 then
    copy first half of array A to new array B
    copy second half of array A to new array C
    Merge_Sort(B)
    Merge_Sort(C)
    Merge(B, C, A)

15
Q

Merge algorithm

A

Merge(B, C, A)   // A stores the result
// input: sorted arrays B and C of lengths p and q
// output: sorted combined array A of length p + q

  i ← 0; j ← 0; k ← 0
  while i < p and j < q do
    if B[i] ≤ C[j] then
      A[k] ← B[i]
      i ← i + 1
    else
      A[k] ← C[j]
      j ← j + 1
    k ← k + 1
  copy the elements of the remaining sequence to the end of array A
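A runnable Python sketch of Merge_Sort and Merge; list slicing replaces the explicit "copy first/second half" steps, which is an implementation shortcut, not part of the card:

def merge_sort(A):
    # sort A in place by splitting it, sorting the halves and merging them
    if len(A) > 1:
        B = A[:len(A) // 2]        # first half
        C = A[len(A) // 2:]        # second half
        merge_sort(B)
        merge_sort(C)
        merge(B, C, A)

def merge(B, C, A):
    # merge the sorted arrays B and C into A (len(A) == len(B) + len(C))
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    A[k:] = B[i:] + C[j:]          # copy the remaining elements

# example: data = [5, 2, 9, 1]; merge_sort(data); data == [1, 2, 5, 9]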

16
Q

how efficient is merge sort

A

O(n * log(n))

17
Q

Quick sort idea

A

- Pick any element of the array as the pivot element
- Partition the elements into a left part (elements smaller than the pivot) and a right part (elements larger than the pivot)
- Repeat the same procedure recursively for both parts

18
Q

Quick Sort Algorithm

A

Quick_Sort(A, l, r)   // array A of n integers, bounds l and r
  if l < r then
    p ← Partition(A, l, r)
    Quick_Sort(A, l, p-1)
    Quick_Sort(A, p+1, r)

19
Q

Partition idea

A

Input: array A and index p of the pivot element
Output: array A with rearranged elements
Scan the array from the left and from the right using pointers i and j.
Whenever A[i] > A[p] and A[j] < A[p], swap A[i] and A[j].
When the pointers meet somewhere in the middle, stop scanning and
swap the pivot element into the middle.

20
Q

Partition Code

A

Partition(A, l, r)   // input: array A, partition bounds l and r; pivot is A[r]
  i ← l - 1
  j ← r
  p ← r
  while i < j do
    do i ← i + 1 while (i < j and A[i] < A[p])
    do j ← j - 1 while (j > i and A[j] > A[p])
    if i ≥ j then
      swap A[i] and A[p]
    else
      swap A[i] and A[j]
  return i   // final position of the pivot
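A runnable Python sketch of quicksort; it uses the simpler Lomuto-style partition with the rightmost element as pivot instead of the two-pointer scheme above, so the swap pattern differs from the card:

def quick_sort(A, l=0, r=None):
    # sorts A[l..r] in place
    if r is None:
        r = len(A) - 1
    if l < r:
        p = partition(A, l, r)
        quick_sort(A, l, p - 1)
        quick_sort(A, p + 1, r)

def partition(A, l, r):
    # Lomuto partition: pivot is A[r]; returns the pivot's final index
    pivot = A[r]
    i = l - 1
    for j in range(l, r):
        if A[j] < pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

# example: data = [3, 8, 2, 5]; quick_sort(data); data == [2, 3, 5, 8]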

21
Q

Master theorem formula

how to compute d

what are three cases

how to determine c

A

T(n) = a · T(n/b) + f(n)

d = logb(a)

Case 1: Θ(n^d)
f(n) ∈ O(n^c) for some constant c < d

Case 2: Θ(n^d · log^(k+1)(n))
f(n) ∈ Θ(n^d · log^k(n)) for some k ≥ 0

Case 3: Θ(f(n))
f(n) ∈ Ω(n^c) for some constant c > d, and additionally
a · f(n/b) ≤ k · f(n) for some constant k < 1 and large
enough n.

c is the power of n in f(n); no power of n means c = 0.
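Worked example (not from the card): for merge sort, T(n) = 2 · T(n/2) + n, so a = 2, b = 2 and d = log2(2) = 1. Since f(n) = n ∈ Θ(n^1 · log^0(n)), case 2 applies with k = 0, giving T(n) ∈ Θ(n^1 · log^1(n)) = Θ(n · log(n)).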

22
Q

what does rank k mean

A

An element of a totally ordered set has rank k if exactly k − 1
elements are smaller than this element

23
Q

Quick sort best case
prove

A

Requires n·log(n) + O(n) comparisons.
Recurrence relation: in the best case, quicksort recursively calls itself on two subproblems of half the size:
Tqs(n) = Tqs((n-1)/2) + Tqs((n-1)/2) + n
Tqs(1) = 0

24
Q

Quick Sort worst case

A

requires O(n^2)
Recurrence relation
Tqs(n) = Tqs(n-1) + n-1 for n>=2
Tqs(1) = 0

25
Tree terminology: inner nodes; depth of a node; height of a node; number of leaves of the binary (decision) tree; #comparisons in the worst case
Inner nodes: all nodes with children. Depth of a node: length of the path to its root. Height of a node: length of the longest downward path from that node to a leaf. Number of leaves of the binary decision tree for sorting: n!. Number of comparisons in the worst case: h(T).
26
Let λ(T) be the number of leaves of binary tree T. Let h(T) be the height of T. Then λ(T) ≤ 2^h(T). Prove the theorem.
Base case: h(T) = 0 and λ(T) = 1, so the base case holds. Hypothesis: assume that λ(T) ≤ 2^h(T) for all trees of height h(T) = k-1. Step: show it is true for h(T) = k. Consider any tree T of height k. If the last level of T were removed, the resulting tree T' would have height k-1, which allows us to apply the hypothesis: λ(T') ≤ 2^h(T') = 2^(k-1). By restoring the last level, the number of leaves at most doubles compared to T', as every leaf of T' has at most two children in T. This implies λ(T) ≤ 2 · λ(T') ≤ 2 · 2^h(T') = 2 · 2^(k-1) = 2^k.
27
Quick select idea
Searches for the element of rank k. In every round, pick a pivot element x, partition the array and determine the rank p of x. If k = p, then x is the element we are looking for; if k < p, continue the search only in the left part; if k > p, continue the search only in the right part.
28
Quick select algorithm (Partition is the same as in quicksort)
Quick_Select(A, l, r, k)   // input: array A, bounds l and r, rank k
  p ← Partition(A, l, r)   // p is the split position (pivot index)
  if p == k-1 then
    return A[p]
  if p > k-1 then
    return Quick_Select(A, l, p-1, k)
  else
    return Quick_Select(A, p+1, r, k)

Worst case: O(n^2)
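A runnable Python sketch; like the quicksort example it substitutes a Lomuto-style partition (pivot A[r]) for the card's two-pointer partition:

def quick_select(A, k, l=0, r=None):
    # returns the element of rank k (1-based) in A[l..r]; reorders A
    if r is None:
        r = len(A) - 1
    p = partition(A, l, r)
    if p == k - 1:
        return A[p]
    if p > k - 1:
        return quick_select(A, k, l, p - 1)
    return quick_select(A, k, p + 1, r)

def partition(A, l, r):
    # Lomuto partition: pivot is A[r]; returns its final index
    pivot, i = A[r], l - 1
    for j in range(l, r):
        if A[j] < pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

# example: quick_select([7, 2, 9, 4, 1], 2) -> 2   (second smallest)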
29
Incidence
an edge e is incident on a vertex v if v is an endpoint of e.
30
Multiple edges / loop / simple graph
Multiple edges: edges that are incident to the same two vertices. Loop: an edge that connects a vertex with itself. Simple graph: a graph that contains neither multiple edges nor loops.
31
degree of a vertex
The number of edges that are incident on v.
32
Handshaking lemma and proof
The sum of all vertex degrees is equal to two times the number of edges. Proof: since every edge has 2 endpoints and contributes 1 to the degree of each of them, the sum must be equal to two times the number of edges.
33
The number of vertices with an odd (uneven) degree is even.
34
walk / trail / path / circuit / cycle
Walk: a sequence of edges for which there is a corresponding sequence of vertices, each consecutive pair adjacent. Trail: a walk in which each edge is used at most once. Path: a walk in which each vertex is used at most once. Circuit: a closed trail. Cycle: a circuit in which only the first and last vertex are equal.
35
Tree properties
Cycle-free and connected; G has n-1 edges, where n is the number of vertices.
36
Forest
An undirected, cycle-free graph that may consist of multiple trees (it need not be connected).
37
d-ary tree
A tree in which every node has at most d children.
38
Preorder traversal
Vertex, left subtree, right subtree (applied recursively: within each subtree, the vertex is again visited first).
39
Inorder Traversal
Left subtree, vertex, right subtree (applied recursively).
40
PostOrder traversal
Left subtree, right subtree, vertex (applied recursively).
41
Adjacency list and its space complexity?
For every vertex, a list of the vertices to which it is adjacent. This requires n lists; every edge is represented twice, once for each endpoint, giving 2·m list nodes; each node stores data and a reference and thus requires 2 units of space. Total: n + 2·2·m = O(n + m).
42
Adjacency matrix (space complexity O(n^2))
A matrix in which the element at row i and column j is 1 if there is an edge from vertex vi to vertex vj, and 0 otherwise.
43
Arbitrary tree implementations: tree of arbitrary degree / first child-next sibling representation
Tree of arbitrary degree: each node has a list or array of references pointing to its children. First child-next sibling representation: every node has two pointers; the left pointer points to the first child of the vertex, the right pointer points to its next sibling (the next vertex on the same level).
44
Sorted array complexity for is_member / insert / delete
is_member: O(log(n)) (binary search). insert(key k): binary search O(log(n)) to find the position, but all subsequent elements must be shifted to the right (O(n) in the worst case). delete(key k): binary search O(log(n)), then all subsequent elements are shifted to the left (O(n) in the worst case).
45
Accounting method (cost of an insert is 1, cost of one copy operation is 1)
Charge a fixed income s for every insert operation (chosen as small as possible). Prove that this income is sufficient to pay for any sequence of n insert operations, including all copy operations. Then s · n is an upper bound on the running time.
46
Search for key x in a search tree: algorithm and worst case
search(k)
  if k == key then return data
  if k < key and left != null then return left.search(k)
  if k > key and right != null then return right.search(k)
  return null

Worst case: O(h(T))
47
Insert a key x: algorithm and worst case
insert(k, d)   // key k and data d
  if k == key then
    data ← d
  else if k < key then
    if left != null then
      left.insert(k, d)
    else
      create new TreeNode T with k and d
      left ← reference to T
  else
    if right != null then
      right.insert(k, d)
    else
      create new TreeNode T with k and d
      right ← reference to T

Worst case: O(h(T))
48
how to delete key x
How do we delete a key x?
1. Search for the key x
2. If x has no children (i.e. is a leaf), simply remove x
3. If x has exactly one child, then remove x by connecting x's child to x's parent
4. If x has two children, then replace x by the smallest vertex s in the right subtree of x and attach the right subtree of s to the parent of s
Worst case: O(h(T))
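A minimal Python sketch of a search-tree node covering search and insert as described above (delete is omitted for brevity; the class and field names are my own, not from the cards):

class TreeNode:
    def __init__(self, key, data):
        self.key, self.data = key, data
        self.left = self.right = None

    def search(self, k):
        # returns the data stored for key k, or None if k is absent
        if k == self.key:
            return self.data
        if k < self.key and self.left is not None:
            return self.left.search(k)
        if k > self.key and self.right is not None:
            return self.right.search(k)
        return None

    def insert(self, k, d):
        # walks down the tree and attaches a new leaf; O(h(T)) in the worst case
        if k == self.key:
            self.data = d
        elif k < self.key:
            if self.left is not None:
                self.left.insert(k, d)
            else:
                self.left = TreeNode(k, d)
        else:
            if self.right is not None:
                self.right.insert(k, d)
            else:
                self.right = TreeNode(k, d)

# example: root = TreeNode(8, 'a'); root.insert(3, 'b'); root.search(3) -> 'b'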
49
Modulo method / multiplication method
Modulo method: h(k) = k mod n. Multiplication method: h(k) = ⌊n · (k · a − ⌊k · a⌋)⌋, where ⌊·⌋ denotes the floor (integer part) and a is a constant, commonly a ≈ 0.61803 (= (√5 − 1)/2).
50
Ideal hash function (definition)
Let p(k) be the probability of the occurrence of key k in U. A hash function h is ideal for U if, for every hash table position i, the probabilities of the keys mapped to i sum to 1/n, i.e. Σ_{k: h(k)=i} p(k) = 1/n.
51
Load factor / probing / cache locality
Load factor: α = m/n, the number of stored keys divided by the size of the hash table. Probing: inspecting a position (e.g. checking whether it is free or contains the sought key). Cache locality: data stored close together in memory allows faster access.
52
Chaining collision resolution
All keys that are mapped to the same position are stored in a sorted linked list. Example: search for 87. Access the array at index 4 (one probing); check the first stored element, 11 != 87 (one probing); check the second element, 87 = 87 (one probing); so three probings in total.
53
How many probings are needed on average for a successful / an unsuccessful search (chaining)?
Unsuccessful search: the mapping to a table position is one probing; every list element must be probed; the lists have average length α; average cost 1 + α. Successful search: the mapping is one probing; on average we probe half of the list elements to find the right one; average cost 1 + α/2. Worst case for all operations: O(m), where m is the number of stored elements.
54
Linear probing
Keys are stored in the array itself. In case of a collision, insert tries to find another position using the probe function h(k, i) = (h(k) + i) mod n, where h(k) is the modulo hash function.
55
Search for key k using linear probing
Map k to an index in the hash table using h(k, 0) = h(k)
* If position h(k) is free, stop (element not in the hash table)
* If h(k) contains k, stop (search successful)
* If h(k) contains a different key, repeat for (h(k) + 1) mod n
* If (h(k) + 1) mod n also contains a different key, try (h(k) + 2) mod n, and so on
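A hedged Python sketch of insert and search with linear probing (fixed table size, integer keys, no deletion or resizing; the class and names are my own simplifications):

class LinearProbingTable:
    def __init__(self, n):
        self.n = n
        self.slots = [None] * n           # None marks a free position

    def _h(self, k, i):
        # h(k, i) = (h(k) + i) mod n with h(k) = k mod n
        return (k + i) % self.n

    def insert(self, k):
        for i in range(self.n):
            pos = self._h(k, i)
            if self.slots[pos] is None or self.slots[pos] == k:
                self.slots[pos] = k
                return True
        return False                      # table is full

    def search(self, k):
        for i in range(self.n):
            pos = self._h(k, i)
            if self.slots[pos] is None:   # free slot: k cannot occur later in the sequence
                return False
            if self.slots[pos] == k:
                return True
        return False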
56
Primary Clustering Secondary Clustering
Primary clustering: for every pair of keys k and k′ there are i and i′ such that h(k, i + j) = h(k′, i′ + j) for all j.
Example: h(33, i) sequence: 3, 4, 5, 6, 7, 8, 9, 0, 1, 2; h(54, i) sequence: 4, 5, 6, 7, 8, 9, 0, 1, 2, 3. Primary clustering, since h(33, 1 + j) = h(54, j).
Secondary clustering: for every pair of keys k, k′ with h(k) = h(k′), the probe sequences are identical.
Example: h(54, i) sequence: 4, 5, 6, 7, 8, 9, 0, 1, 2, 3; h(94, i) sequence: 4, 5, 6, 7, 8, 9, 0, 1, 2, 3. Secondary clustering, since h(54, i) = h(94, i).
57
Quadratic probing
Uses the probe function h(k, i) = (h(k) + (−1)^(i+1) · ⌈i/2⌉^2) mod n, i.e. the offsets alternate as +1, −1, +4, −4, +9, …
58
Double Hashing
uses second hash function h' h(k, i) = (h(k) + i · h′(k)) mod n
59
Euler circuit / Euler trail
Euler circuit: a circuit that contains every edge of G exactly once. Euler trail: a trail that contains every edge of G exactly once (a graph with an Euler trail is called semi-Eulerian).
60
Connected graph is Eulerian if and only if every vertex degree is even Proof
Part 1: if G is Eulerian then every vertex degree is even. Let T be an Euler circuit in G and v an arbitrary vertex that occurs k times in T. Every time v is visited, there must be an incoming edge and an outgoing edge. Since every edge is used exactly once, the degree of v is 2·k and thus even. Part 2: if every vertex degree is even then G is Eulerian. Assume T is a trail of maximum length in G but not an Euler circuit. According to the lemma, T must be a circuit. Assume there are edges not contained in T. Since G is connected, there must be at least one such edge e that is incident on a vertex vi of T. Then T′ = (e, ei, …, ek, e1, …, ei−1) is a trail that is longer than T, a contradiction.
61
A connected graph is semi-Eulerian (i.e. has an Euler trail) if and only if exactly 0 or 2 vertices have odd degree.
62
A maximal trail is a trail that cannot be extended by adding more edges at the end.
63
Hierholzer’s Algorithm idea
1. Start at any vertex and create a trail K of maximum length by following consecutive unvisited edges. 2. If K contains all edges, return K. 3. Otherwise, start from any vertex of K that is incident to an unvisited edge and form a new circuit K′ by following unvisited edges. 4. Combine K and K′ into a new circuit K′′. 5. K ← K′′. 6. Go to step 2.
64
Hierholzer's pseudocode
Hierholzer(G)   // input: graph G = (V, E) in which all vertices have even degree; output: Euler circuit
  select some vertex v0 in V
  extend v0 to a maximal trail K = (VK, EK) by adding unvisited incident edges at the trail's end vertices
  while EK != E do
    select a vertex v in VK that is incident to an edge e not in EK
    extend v to a maximal trail K′ using only unvisited edges from E \ EK
    K′′ ← combination of K and K′
    K ← K′′
  return K
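An iterative Python sketch of the same idea; the stack-based formulation (detours are merged while backtracking) is an implementation choice, not from the card. It assumes a connected graph in which every vertex has even degree:

from collections import defaultdict, Counter

def hierholzer(edges, start):
    # edges: list of undirected edges (u, v), possibly with repeats
    adj = defaultdict(Counter)
    for u, v in edges:
        adj[u][v] += 1
        adj[v][u] += 1
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:                        # an unvisited edge is left at v: extend the trail
            u = next(iter(adj[v]))
            adj[v][u] -= 1
            adj[u][v] -= 1
            if adj[v][u] == 0:
                del adj[v][u]
            if adj[u][v] == 0:
                del adj[u][v]
            stack.append(u)
        else:                             # dead end: v is finished, add it to the circuit
            circuit.append(stack.pop())
    return circuit[::-1]                  # vertex sequence of length len(edges) + 1

# example: hierholzer([(0, 1), (1, 2), (2, 0)], 0) -> [0, 1, 2, 0]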
65
Depth-first Search
Start from a vertex; explore a branch as far as possible, then backtrack, explore the next branch, and so on.
66
Breadth-first Search
Start from a vertex; visit every vertex at depth 1, then every vertex at depth 2, and so on.
67
shortest path
A path from source vertex s to destination vertex t in G is a shortest path if its weight is no larger than the weight of any other (s, t)-path.
68
Let P be a shortest (s, t)-path. Then every connected subgraph of P is a shortest path.
If some subpath of P were not a shortest path between its endpoints, replacing it by a shorter one would yield a shorter (s, t)-path, a contradiction. (The following argument proves the maximal-trail lemma used in the Euler-circuit proof: assume v0 ≠ vk. Every time vk was visited before the last visit, we must have used two edges, one to reach vk and one to leave it. For the last visit we use one edge to reach vk, and according to the assumption there is no further unused edge incident to vk, since all edges incident to vk are in T. Hence vk is incident to an odd number of edges, i.e. has odd degree, which contradicts the fact that all vertex degrees are even.)
69
Single-pair / single-source / all-pairs problem
Single-pair: find a shortest path from s to t. Single-source: find shortest paths from s to all other vertices in the graph. All-pairs: find shortest paths between every pair of vertices.
70
Greedy algorithm
at every step make the best possible move
71
Dijkstra's algorithm
Solves the single-source shortest path problem from some vertex s using a greedy strategy: start from s; keep a data structure U with all vertices not yet added to the tree; in every round, add the vertex v in U with minimal distance to s, i.e. min{dist(s, v)}, to the tree; update the information (distance and predecessor) for the neighbours of v in the data structure.
72
Dijkstra code
Dijkstra(G, s)   // G = (V, E) with positive weights, source vertex s
  dist(v) ← ∞ for all v ∈ V; dist(s) ← 0
  pred(v) ← NULL for all v ∈ V; pred(s) ← s
  U ← createPriorityQueue(V)
  while not isEmpty(U) do
    v ← extractMin(U)
    for all u ∈ N(v) ∩ U do   // look at all neighbours u of v still in U
      if dist(v) + w(v, u) < dist(u) then
        decreaseKey(u, dist(v) + w(v, u))
        pred(u) ← v
  return (pred, dist)   // shortest-path tree

For managing U and dist, we need a so-called priority queue offering:
* extractMin
* decreaseKey
* insert
* isEmpty
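A runnable Python sketch; instead of decreaseKey it uses the common "lazy deletion" trick with heapq (push a fresh entry and skip stale ones), which is a substitution for the priority queue described above:

import heapq

def dijkstra(graph, s):
    # graph: dict mapping each vertex to a list of (neighbour, weight) pairs,
    # all weights positive
    dist = {v: float('inf') for v in graph}
    pred = {v: None for v in graph}
    dist[s], pred[s] = 0, s
    heap = [(0, s)]
    done = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:                      # stale entry, v already finalised
            continue
        done.add(v)
        for u, w in graph[v]:
            if d + w < dist[u]:
                dist[u] = d + w            # "decreaseKey" by pushing a fresh entry
                pred[u] = v
                heapq.heappush(heap, (dist[u], u))
    return pred, dist

# example:
# g = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}
# dijkstra(g, 's')[1] -> {'s': 0, 'a': 2, 'b': 3}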
73
Spanning subgraph / spanning tree / minimum spanning tree
Spanning subgraph: a subgraph of G that contains all vertices of G. Spanning tree: a spanning subgraph that is a tree. Minimum spanning tree: a spanning tree of minimum weight.
74
Kruskal's algorithm to find an MST: idea
Start with an empty edge set T. In every step, add the next edge of minimal weight from G to T, as long as it does not introduce a cycle. Stop once a connected graph is reached.
75
Kruskal's code
Kruskal(G)   // G = (V, E) undirected and connected
  T ← ∅   // ∅ is the empty set
  E′ ← E
  sort the edges in E′ in ascending order of weight
  while E′ ≠ ∅ do
    pick an edge e in E′ with minimum weight
    if adding e to T does not form a cycle then
      add e to T
    remove e from E′
  return (V, T)
76
Union-find data structure operations
* makeset(x) … creates a one-element set {x}
* find(x) … returns the subset containing x
* union(x, y) … constructs the union of the disjoint subsets Sx and Sy containing x and y, respectively, and replaces Sx and Sy by this union
77
Quick-find implementation
The smallest element of each subset is used as its representative. An array is used to find the representative of an element; a linked list is used to store each subset.
78
How does the union-find data structure help with Kruskal's algorithm?
Store each vertex ID x in a separate set using makeset(x). Each time an edge (i, j) is considered for addition to the MST, it must be checked whether it creates a cycle: it does if find(i) = find(j). If the edge can be added, perform union(i, j).
79
Complexity of Kruskal's algorithm is O(m·log(n)); prove it
Sorting the m edges takes O(m·log(m)) = O(m·log(n)). Every find operation takes O(1) and there are at most 2·m of them. A sequence of n union operations takes O(n·log(n)). Altogether this is O(m·log(n)).
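A compact Python sketch of Kruskal's algorithm with a union-find structure; it uses union by size and path compression, which goes slightly beyond the quick-find variant on the cards:

def kruskal(n, edges):
    # n vertices labelled 0..n-1; edges is a list of (weight, u, v) tuples
    parent = list(range(n))
    size = [1] * n

    def find(x):                       # representative of x's set, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                   # merge the sets containing x and y
        rx, ry = find(x), find(y)
        if size[rx] < size[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        size[rx] += size[ry]

    mst = []
    for w, u, v in sorted(edges):      # edges in ascending order of weight
        if find(u) != find(v):         # adding (u, v) does not create a cycle
            union(u, v)
            mst.append((u, v, w))
    return mst

# example: kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
# -> [(0, 1, 1), (1, 2, 2), (2, 3, 4)]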
80
Prim's algorithm idea
Select an arbitrary vertex v0 to start the tree. While there are vertices not in the tree: select an edge of minimum weight among all edges between a vertex in the tree and a vertex outside the tree; add the selected edge and its outside end vertex to the tree. Return the resulting tree.
81
Prim's code
Prim(G)   // G = (V, E) undirected and connected
  VT ← {v0}   // tree vertices, starting from an arbitrary vertex v0
  ET ← ∅      // tree edges
  for i ← 1 to |V| − 1 do
    find a minimal-weight edge e* = (v*, u*) among all edges (v, u) such that v is in VT and u is in V \ VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
  return (V, ET)   // ET is the set of edges of the MST
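A heap-based Python sketch of Prim's algorithm; the heap replaces the "find a minimal-weight edge" scan and is my own implementation choice:

import heapq

def prim(graph, v0):
    # graph: dict mapping each vertex to a list of (neighbour, weight) pairs
    in_tree = {v0}
    frontier = [(w, v0, u) for u, w in graph[v0]]
    heapq.heapify(frontier)
    mst = []
    while frontier and len(in_tree) < len(graph):
        w, v, u = heapq.heappop(frontier)     # lightest edge leaving the tree
        if u in in_tree:
            continue
        in_tree.add(u)
        mst.append((v, u, w))
        for x, wx in graph[u]:
            if x not in in_tree:
                heapq.heappush(frontier, (wx, u, x))
    return mst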
82
Dynamic programming
* It is used for problems in which subproblems are not independent, i.e. subproblems share subsubproblems
83
Fibonacci idea of dynamic programming
Compute each F(i) exactly once and store it in an array. The next time we need F(i), we look it up in the array.
84
Coin Row problem
Input: a row of n coins whose values are positive integers. Output: the maximum amount of money that can be picked up, subject to the constraint that no two adjacent coins are picked.
85
Coin row relation
F(0) = 0 F(1) = c1 F(n) = max{cn + F(n − 2), F(n − 1)} for n > 1
86
Coin row alg
Coin_Row(C)   // input: array C of coin values
  F[0] ← 0
  F[1] ← C[1]
  for i ← 2 to n do
    F[i] ← max(C[i] + F[i-2], F[i-1])
  return F[n]
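A runnable Python version (0-based lists, so C[i - 1] is the card's c_i):

def coin_row(C):
    # maximum sum of non-adjacent coins
    F = [0] * (len(C) + 1)
    if C:
        F[1] = C[0]
    for i in range(2, len(C) + 1):
        F[i] = max(C[i - 1] + F[i - 2], F[i - 1])   # take coin i or skip it
    return F[len(C)]

# example: coin_row([5, 1, 2, 10, 6, 2]) -> 17   (5 + 10 + 2)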
87
Robot coin collection problem
Coins are placed on an n × m board. A robot starts in the upper-left cell of the board and needs to collect as many coins as possible on its way to the bottom-right cell, moving only one cell right or one cell down in each step.
88
Idea to solve it
Let (i, j) denote the cell in the i-th row and j-th column (indices start from 1). Let F(i, j) be the largest number of coins the robot can collect and bring to cell (i, j). Then F(0, j) = 0, F(i, 0) = 0 and F(i, j) = max{F(i − 1, j), F(i, j − 1)} + cij, where cij = 1 if there is a coin in cell (i, j) and 0 otherwise.
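A runnable Python sketch; the extra row and column of zeros implement F(0, j) = F(i, 0) = 0:

def robot_coins(board):
    # board: n x m matrix with board[i][j] == 1 if cell (i, j) holds a coin
    n, m = len(board), len(board[0])
    F = [[0] * (m + 1) for _ in range(n + 1)]       # row/column 0 are padding
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j], F[i][j - 1]) + board[i - 1][j - 1]
    return F[n][m]

# example: robot_coins([[0, 1], [1, 1]]) -> 2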
89
Alphabet / word / ε / meaning of "proper"
Alphabet: a finite set of symbols. Word: a sequence of symbols from an alphabet. ε: the empty word, of length 0. Proper: a subword, prefix or suffix s′ of s is proper if s′ ≠ s.
90
String search algorithm
Loop through all indices i of the text t; for each index i, check whether t_i, t_{i+1}, … matches the search string; if it does, return i.
Simple_Search(t, s)   // text t, search string s
  n ← length(t); m ← length(s)
  i ← 0; j ← 0
  while i ≤ n − m do
    while t[i + j] == s[j] do
      j ← j + 1
      if j == m then
        return i
    i ← i + 1
    j ← 0
  return −1
91
Border / proper border / actual border
Let s be a word. s′ is a border of s if s′ is a prefix and a suffix of s. It is a proper border if it is a proper prefix and a proper suffix of s. The actual border is the longest proper border of s.
92
KMP algorithm use borders
stores the actual border length of every prefix of the search string s in an array
93
KMP algorithm idea
Loop through all indices i of the text t. For each i, check whether the text starting at position i matches the search string; if the whole search string matches, return i. If the first j characters of the search string are matched, continue the search at i + j (skipping indices in the loop). If the matched part t_i … t_{i+j−1} has an actual border of length b, assume that b characters are already matched.
94
KMP code
KMP(t, s)
  n ← length(t); m ← length(s)
  create array border of size m + 1
  compute_borders(border, m, s)
  i ← 0; j ← 0
  while i ≤ n − m do
    while t[i + j] == s[j] do
      j ← j + 1
      if j == m then
        return i
    i ← i + j − border[j]   // previously: i ← i + 1
    j ← max{0, border[j]}   // previously: j ← 0
  return −1
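A runnable Python version; the border computation is not given on the card, so compute_borders below is my own standard helper (it returns the array, with border[0] = -1, rather than filling a passed-in array):

def compute_borders(s):
    # border[j] = length of the longest proper border of s[:j]; border[0] = -1
    m = len(s)
    border = [0] * (m + 1)
    border[0] = b = -1
    for j in range(1, m + 1):
        while b >= 0 and s[b] != s[j - 1]:
            b = border[b]
        b += 1
        border[j] = b
    return border

def kmp(t, s):
    # returns the first index where s occurs in t, or -1
    n, m = len(t), len(s)
    border = compute_borders(s)
    i = j = 0
    while i <= n - m:
        while j < m and t[i + j] == s[j]:
            j += 1
        if j == m:
            return i
        i += j - border[j]              # shift by the matched prefix minus its border
        j = max(0, border[j])
    return -1

# example: kmp("abcabcabd", "abcabd") -> 3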
95
The KMP algorithm requires at most 2·n − m + 1 comparisons if the border array is given. Proof?
Unsuccessful comparisons: after each one, i is increased by at least 1, hence there are at most n − m + 1 of them. Successful comparisons: consider the sum i + j; after a successful comparison, i + j grows by 1 (as j is incremented) and it never decreases; since i + j ≤ (n − m) + m = n, at most n successful comparisons are performed.
96
data compression
Encoding information using fewer bits than the original representation. Lossless if the original representation can be restored from the compressed version; lossy otherwise.
97
For every natural number n and every lossless compression method, there is at least one text of length n (consisting of n bits) which cannot be compressed.
Assume that every sequence of n bits is a text. There are exactly 2^n texts of length n. How many texts of length at most n − 1 are there? Σ_{i=1}^{n−1} 2^i = 2^n − 2 < 2^n. Therefore at least one text of length n must be mapped to a text of length at least n, i.e. it cannot be compressed.
98
Prefix code
A code in which no code word is the prefix of another code word. (Exercise: is {101, 001, 10, 1100} a prefix code? No: 10 is a prefix of 101.)
99
Huffman encoding
A prefix code that maps letters to bit strings. It uses a binary tree T whose leaves represent the letters; the code word of a letter is given by the path from the root to its leaf: at every node one bit is added to the code word, a 0 for going left and a 1 for going right.
100
Length of a letter's code word / average code word length
The length of a letter's code word is the length of the path from the root to its leaf in T. The average code word length is the sum over all letters of (probability of the letter · length of its code word).
101
Huffman's algorithm
Every letter is represented by a single vertex; the probability p of the letter is stored with it. These vertices form the trees of the initial forest. While there is more than one tree: let T1 and T2 be the two trees of smallest probability; replace T1 and T2 by a new tree T1,2 consisting of a new vertex with T1 and T2 as its subtrees (its probability is the sum of the probabilities of T1 and T2).
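A short Python sketch using heapq; the counter in the heap entries only breaks ties between equal probabilities (an implementation detail, not part of the algorithm):

import heapq
from itertools import count

def huffman(probabilities):
    # probabilities: dict letter -> probability; returns dict letter -> code word
    tiebreak = count()
    # a tree is either a letter or a [left, right] pair of subtrees
    heap = [(p, next(tiebreak), letter) for letter, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)    # the two trees of smallest probability
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), [t1, t2]))

    codes = {}
    def walk(tree, code):
        if isinstance(tree, list):
            walk(tree[0], code + "0")      # 0 for the left subtree
            walk(tree[1], code + "1")      # 1 for the right subtree
        else:
            codes[tree] = code or "0"      # single-letter alphabet edge case
    walk(heap[0][2], "")
    return codes

# example: huffman({"a": 0.5, "b": 0.25, "c": 0.25})
# -> {"a": "0", "b": "10", "c": "11"} (exact codes depend on tie-breaking)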