Algorithms and Data Structures Flashcards

1
Q

What is an algorithm?

A

In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of input(s) and produces the desired output.

https://www.programiz.com/dsa/algorithm

2
Q

What are the qualities of a good algorithm?

A
  • Input and output should be defined precisely.
  • Each step in the algorithm should be clear and unambiguous.
  • An algorithm should be the most effective among the many different ways to solve a problem.
  • An algorithm shouldn’t include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.

https://www.programiz.com/dsa/algorithm

3
Q

What are data structures?

A

A data structure is a way of storing and organizing data on a computer so that it can be accessed and updated efficiently.

Depending on your requirements and the project, it is important to choose the right data structure.

https://www.programiz.com/dsa/data-structure-types

4
Q

What are the types of data structures?

A

Basically, data structures are divided into two categories:

  • Linear data structure
  • Non-linear data structure

https://www.programiz.com/dsa/data-structure-types

5
Q

What is Asymptotic Analysis?

A

Asymptotic analysis is the study of how an algorithm's performance changes as the order of the input size changes.

https://www.programiz.com/dsa/asymptotic-notations

6
Q

What is Asymptotic Notation?

A

Asymptotic notations are the mathematical notations used to describe the running time of an algorithm when the input tends towards a particular value or a limiting value.

https://www.programiz.com/dsa/asymptotic-notations

7
Q

What is Big-O notation?

A

Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.

O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

https://www.programiz.com/dsa/asymptotic-notations

8
Q

What is Big-Omega notation?

A

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best case complexity of an algorithm.

Ω(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

https://www.programiz.com/dsa/asymptotic-notations

9
Q

What is Big-Theta notation?

A

Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it gives a tight bound and is often used for analyzing the average-case complexity of an algorithm.

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

https://www.programiz.com/dsa/asymptotic-notations

10
Q

What is little-o notation?

A

Big-O is used as a tight upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. Little-o (o()) notation is used to describe an upper bound that cannot be tight.

Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c·g(n) for every integer n ≥ n0.

Thus, little o() means a loose upper bound of f(n). Little o is a rough estimate of the maximum order of growth whereas Big-O may be the actual order of growth.

In mathematical relation, f(n) = o(g(n)) means lim(n→∞) f(n)/g(n) = 0.

https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/

11
Q

What is little-omega notation?

A

Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c·g(n) ≥ 0 for every integer n ≥ n0.

f(n) has a higher growth rate than g(n), so the main difference between Big Omega (Ω) and little omega (ω) lies in their definitions. In the case of Big Omega, f(n) = Ω(g(n)) and the bound is 0 ≤ c·g(n) ≤ f(n), but in the case of little omega, it is true for 0 ≤ c·g(n) < f(n).

The relationship between Big Omega (Ω) and Little Omega (ω) is similar to that of Big-O and Little o except that now we are looking at the lower bounds. Little Omega (ω) is a rough estimate of the order of growth whereas Big Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight. And, f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

In mathematical relation, if f(n) ∈ ω(g(n)), then lim(n→∞) f(n)/g(n) = ∞.

https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/

12
Q

What is the Master Theorem?

A

The master method is a formula for solving recurrence relations of the form:

T(n) = aT(n/b) + f(n),
where,
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem. All subproblems are assumed
to have the same size.
f(n) = cost of the work done outside the recursive call,
which includes the cost of dividing the problem and
cost of merging the solutions

Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.

If a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function, then the time complexity of a recursive relation is given by

T(n) = aT(n/b) + f(n)

where, T(n) has the following asymptotic bounds:

1. If f(n) = O(n^(log_b a − ϵ)), then T(n) = Θ(n^(log_b a)).

2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).

3. If f(n) = Ω(n^(log_b a + ϵ)), then T(n) = Θ(f(n)).

ϵ > 0 is a constant.
Each of the above conditions can be interpreted as:

  1. If the cost of solving the sub-problems at each level increases by a certain factor, the value of f(n) will become polynomially smaller than n^(log_b a). Thus, the time complexity is dominated by the cost of the last level, i.e. n^(log_b a).
  2. If the cost of solving the sub-problems at each level is nearly equal, then the value of f(n) will be n^(log_b a). Thus, the time complexity will be f(n) times the total number of levels, i.e. n^(log_b a) · log n.
  3. If the cost of solving the sub-problems at each level decreases by a certain factor, the value of f(n) will become polynomially larger than n^(log_b a). Thus, the time complexity is dominated by the cost of f(n).

The master theorem cannot be used if:

  • T(n) is not monotone, e.g. T(n) = sin n
  • f(n) is not a polynomial, e.g. f(n) = 2^n
  • a is not a constant, e.g. a = 2n
  • a < 1

https://www.programiz.com/dsa/master-theorem

13
Q

What is a Divide and Conquer algorithm?

A

A divide and conquer algorithm is a strategy of solving a large problem by

  • breaking the problem into smaller sub-problems
  • solving the sub-problems, and
  • combining them to get the desired output.

To use the divide and conquer algorithm, recursion is used.
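
As an illustration, the three steps above map directly onto merge sort, a classic divide and conquer algorithm (a minimal Python sketch, not a production sort):

```python
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr

    # Divide: break the problem into two smaller sub-problems.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # solve the sub-problems recursively
    right = merge_sort(arr[mid:])

    # Combine: merge the two sorted halves into the desired output.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```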

https://www.programiz.com/dsa/divide-and-conquer

14
Q

What is the time complexity of a Divide and Conquer algorithm?

A

The complexity of the divide and conquer algorithm is calculated using the master theorem.

https://www.programiz.com/dsa/divide-and-conquer

15
Q

When do you use divide and conquer vs dynamic programming?

A

The divide and conquer approach divides a problem into smaller subproblems; these subproblems are further solved recursively. The result of each subproblem is not stored for future reference, whereas, in a dynamic approach, the result of each subproblem is stored for future reference.

Use the divide and conquer approach when the same subproblem is not solved multiple times. Use the dynamic approach when the result of a subproblem is to be used multiple times in the future.

https://www.programiz.com/dsa/divide-and-conquer

16
Q

What are the advantages of the divide and conquer approach?

A
  • The complexity for the multiplication of two matrices using the naive method is O(n^3), whereas using the divide and conquer approach (i.e. Strassen's matrix multiplication) it is O(n^2.8074). This approach also simplifies other problems, such as the Tower of Hanoi.
  • This approach is suitable for multiprocessing systems.
  • It makes efficient use of memory caches.

https://www.programiz.com/dsa/divide-and-conquer

17
Q

What is a stack?

A

A stack is a linear data structure that follows the principle of Last In First Out (LIFO). This means the last element inserted inside the stack is removed first.

You can think of the stack data structure as a pile of plates stacked on top of one another.

https://www.programiz.com/dsa/stack

18
Q

What is LIFO?

A

Last In First Out. In programming terms, putting an item on top of the stack is called push and removing an item is called pop.

https://www.programiz.com/dsa/stack

19
Q

What are the basic operations of a stack?

A

There are some basic operations that allow us to perform different actions on a stack.

Push: Add an element to the top of a stack
Pop: Remove an element from the top of a stack
IsEmpty: Check if the stack is empty
IsFull: Check if the stack is full
Peek: Get the value of the top element without removing it
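
A minimal Python sketch of these operations using a list; the fixed capacity is an assumption added only so IsFull has something to check:

```python
class Stack:
    def __init__(self, capacity=10):
        self.items = []
        self.capacity = capacity  # assumed fixed size, only to illustrate is_full

    def push(self, item):
        # Add an element to the top of the stack.
        self.items.append(item)

    def pop(self):
        # Remove and return the element at the top of the stack.
        return self.items.pop()

    def peek(self):
        # Read the top element without removing it.
        return self.items[-1]

    def is_empty(self):
        return len(self.items) == 0

    def is_full(self):
        return len(self.items) == self.capacity
```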

https://www.programiz.com/dsa/stack

20
Q

What is the time complexity of a stack?

A

For the array-based implementation of a stack, the push and pop operations take constant time, i.e. O(1).

https://www.programiz.com/dsa/stack

21
Q

What are some applications of the stack data structure?

A

To reverse a word - Put all the letters in a stack and pop them out. Because of the LIFO order of stack, you will get the letters in reverse order.

In compilers - Compilers use the stack to calculate the value of expressions like 2 + 4 / 5 * (7 - 9) by converting the expression to prefix or postfix form.

In browsers - The back button in a browser saves all the URLs you have visited previously in a stack. Each time you visit a new page, it is added on top of the stack. When you press the back button, the current URL is removed from the stack, and the previous URL is accessed.

https://www.programiz.com/dsa/stack

22
Q

What is a queue?

A

A queue is a useful data structure in programming. It is similar to the ticket queue outside a cinema hall, where the first person entering the queue is the first person who gets the ticket.

Queue follows the First In First Out (FIFO) rule - the item that goes in first is the item that comes out first.

https://www.programiz.com/dsa/queue

23
Q

What is FIFO?

A

First In First Out (FIFO) rule - the item that goes in first is the item that comes out first.

In programming terms, putting items in the queue is called enqueue, and removing items from the queue is called dequeue.

https://www.programiz.com/dsa/queue

24
Q

What are the basic operations of a queue?

A

A queue is an object (an abstract data structure - ADT) that allows the following operations:

Enqueue: Add an element to the end of the queue
Dequeue: Remove an element from the front of the queue
IsEmpty: Check if the queue is empty
IsFull: Check if the queue is full
Peek: Get the value of the front of the queue without removing it
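
A minimal sketch of these operations using Python's collections.deque, which supports O(1) appends and pops at both ends:

```python
from collections import deque

q = deque()
q.append("a")        # enqueue: add to the rear
q.append("b")
front = q[0]         # peek: read the front without removing it
first = q.popleft()  # dequeue: remove from the front
empty = len(q) == 0  # is-empty check
```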

https://www.programiz.com/dsa/queue

25
Q

What is a limitation of a queue?

A

In an array implementation, after a bit of enqueuing and dequeuing, the usable size of the queue is reduced: the front slots that have been dequeued sit empty and cannot be reused.

We can add to those front indices only when the queue is reset (when all the elements have been dequeued).

After REAR reaches the last index, if we could store extra elements in the empty front spaces, we could make use of them. This is implemented by a modified queue called the circular queue.

https://www.programiz.com/dsa/queue

26
Q

What is the complexity of a queue?

A

The complexity of enqueue and dequeue operations in a queue using an array is O(1). If you use list.pop(n) in Python, the complexity can be O(n) depending on the position of the item to be popped.

https://www.programiz.com/dsa/queue

27
Q

What are the applications of a queue?

A

CPU scheduling, Disk Scheduling

When data is transferred asynchronously between two processes, the queue is used for synchronization. For example: IO buffers, pipes, file IO, etc.

Handling of interrupts in real-time systems.

Call Center phone systems use Queues to hold people calling them in order.

https://www.programiz.com/dsa/queue

28
Q

What are the types of queues?

A

There are four different types of queues:

Simple Queue
Circular Queue
Priority Queue
Double Ended Queue

https://www.programiz.com/dsa/types-of-queue

29
Q

What is a simple queue?

A

In a simple queue, insertion takes place at the rear and removal occurs at the front. It strictly follows the FIFO (First in First out) rule.

https://www.programiz.com/dsa/types-of-queue

30
Q

What is a circular queue?

A

In a circular queue, the last element points to the first element making a circular link.

The main advantage of a circular queue over a simple queue is better memory utilization. If the last position is full and the first position is empty, we can insert an element in the first position. This action is not possible in a simple queue.
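
A minimal array-based circular queue sketch in Python; indices wrap around with the modulo operator so freed front slots are reused:

```python
class CircularQueue:
    def __init__(self, capacity):
        self.items = [None] * capacity
        self.capacity = capacity
        self.front = 0   # index of the front element
        self.size = 0    # number of stored elements

    def enqueue(self, item):
        if self.size == self.capacity:
            raise OverflowError("queue is full")
        rear = (self.front + self.size) % self.capacity  # wrap around
        self.items[rear] = item
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        item = self.items[self.front]
        self.front = (self.front + 1) % self.capacity    # freed slot becomes reusable
        self.size -= 1
        return item
```

Note how, in a full table, an enqueue that follows a dequeue lands in the slot the dequeue just freed — the behavior a simple array queue cannot offer.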

https://www.programiz.com/dsa/types-of-queue

31
Q

What is a priority queue?

A

A priority queue is a special type of queue in which each element is associated with a priority and is served according to its priority. If elements with the same priority occur, they are served according to their order in the queue.

Insertion occurs based on the arrival of the values and removal occurs based on priority.

https://www.programiz.com/dsa/types-of-queue

32
Q

What is a deque?

A

In a double-ended queue, insertion and removal of elements can be performed from either the front or the rear. Thus, it does not follow the FIFO (First In First Out) rule.
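
For example, Python's collections.deque supports insertion and removal at both ends:

```python
from collections import deque

d = deque([2, 3])
d.appendleft(1)      # insert at the front
d.append(4)          # insert at the rear
front = d.popleft()  # remove from the front
rear = d.pop()       # remove from the rear
```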

https://www.programiz.com/dsa/types-of-queue

33
Q

What is the complexity of a circular queue?

A

The complexity of the enqueue and dequeue operations of a circular queue is O(1) for array implementations.

https://www.programiz.com/dsa/circular-queue

34
Q

What are the applications of a circular queue?

A

CPU scheduling

Memory management

Traffic Management

https://www.programiz.com/dsa/circular-queue

35
Q

What’s the difference between a priority queue and a normal queue?

A

In a queue, the first-in-first-out rule is implemented whereas, in a priority queue, the values are removed on the basis of priority. The element with the highest priority is removed first.

https://www.programiz.com/dsa/priority-queue

36
Q

How is a priority queue implemented?

A

Priority queue can be implemented using an array, a linked list, a heap data structure, or a binary search tree. Among these data structures, heap data structure provides an efficient implementation of priority queues.
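
A minimal sketch using Python's heapq module, which maintains a binary min-heap on a plain list; the element with the smallest priority value is served first:

```python
import heapq

tasks = []
heapq.heappush(tasks, (2, "write report"))  # (priority, value) pairs
heapq.heappush(tasks, (1, "fix outage"))
heapq.heappush(tasks, (3, "clean inbox"))

# The lowest priority number comes out first, regardless of insertion order.
priority, task = heapq.heappop(tasks)
```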

https://www.programiz.com/dsa/priority-queue

37
Q

What is the complexity of a priority queue?

A

A comparative analysis of different implementations of a priority queue is given below.

Implementation     | peek | insert   | delete
-------------------|------|----------|---------
Linked List        | O(1) | O(n)     | O(1)
Binary Heap        | O(1) | O(log n) | O(log n)
Binary Search Tree | O(1) | O(log n) | O(log n)
https://www.programiz.com/dsa/priority-queue

38
Q

What are the applications of a priority queue?

A

Some of the applications of a priority queue are:

Dijkstra's algorithm
For implementing a stack
For load balancing and interrupt handling in an operating system
For data compression in Huffman coding

https://www.programiz.com/dsa/priority-queue

39
Q

What are the types of deque?

A

Input Restricted Deque
In this deque, input is restricted to a single end, but deletion is allowed at both ends.

Output Restricted Deque
In this deque, output is restricted to a single end, but insertion is allowed at both ends.

https://www.programiz.com/dsa/deque

40
Q

What is the complexity of a deque?

A

The time complexity of all the basic deque operations (insertion and removal at either end) is constant, i.e. O(1).

https://www.programiz.com/dsa/deque

41
Q

What are the applications of a deque?

A

In undo operations on software.
To store history in browsers.
For implementing both stacks and queues.

https://www.programiz.com/dsa/deque

42
Q

What is a linked list?

A

A linked list is a linear data structure that includes a series of connected nodes. Here, each node stores the data and the address of the next node.

The power of a linked list comes from the ability to break the chain and rejoin it.

Doing something similar in an array would have required shifting the positions of all the subsequent elements.
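
A minimal Python sketch of a node and a traversal; each node stores its data and a reference to the next node:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None  # address of the next node (None at the end)

def traverse(head):
    """Collect every data item by following the next references."""
    items = []
    node = head
    while node is not None:
        items.append(node.data)
        node = node.next
    return items

# Build the list 1 -> 2 -> 3 by chaining nodes together.
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)
```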

https://www.programiz.com/dsa/linked-list

43
Q

What is the complexity of linked lists?

A

Time Complexity

Operation | Worst Case | Average Case
----------|------------|-------------
Search    | O(n)       | O(n)
Insert    | O(1)       | O(1)
Deletion  | O(1)       | O(1)

Space Complexity: O(n)

https://www.programiz.com/dsa/linked-list

44
Q

What are the applications of linked lists?

A

Dynamic memory allocation
Implemented in stack and queue
In undo functionality of software
Hash tables, Graphs

https://www.programiz.com/dsa/linked-list

45
Q

What are the basic linked list operations?

A

The basic linked list operations are:

Traversal - access each element of the linked list
Insertion - adds a new element to the linked list
Deletion - removes the existing elements
Search - find a node in the linked list
Sort - sort the nodes of the linked list

https://www.programiz.com/dsa/linked-list-operations

46
Q

What are the types of linked lists?

A

There are three common types of Linked List.

Singly Linked List
Doubly Linked List
Circular Linked List

https://www.programiz.com/dsa/linked-list-types

47
Q

What is a doubly linked list?

A

A doubly linked list is a type of linked list in which each node consists of 3 components:

*prev - address of the previous node
data - data item
*next - address of next node

https://www.programiz.com/dsa/doubly-linked-list

48
Q

What is the complexity of a doubly linked list?

A

Doubly Linked List Complexity

Operation           | Time Complexity | Space Complexity
--------------------|-----------------|-----------------
Insertion Operation | O(1) or O(n)    | O(1)
Deletion Operation  | O(1)            | O(1)

  1. Complexity of Insertion Operation: insertion operations that do not require traversal have a time complexity of O(1), while an insertion that requires traversal has a time complexity of O(n). The space complexity is O(1).
  2. Complexity of Deletion Operation: all deletion operations run with a time complexity of O(1), and the space complexity is O(1).

https://www.programiz.com/dsa/doubly-linked-list

49
Q

What are the applications of a doubly linked list?

A

Redo and undo functionality in software.
Forward and backward navigation in browsers.
For navigation systems where forward and backward navigation is required.

https://www.programiz.com/dsa/doubly-linked-list

50
Q

Compare singly and doubly linked lists.

A

Singly Linked List | Doubly Linked List
-------------------|-------------------
Each node consists of a data value and a pointer to the next node. | Each node consists of a data value, a pointer to the next node, and a pointer to the previous node.
Traversal can occur in one way only (forward direction). | Traversal can occur in both ways.
It requires less space. | It requires more space because of an extra pointer.
It can be implemented on the stack. | It has multiple usages. It can be implemented on the stack, heap, and binary tree.
https://www.programiz.com/dsa/doubly-linked-list

51
Q

What is a circular linked list?

A

A circular linked list is a type of linked list in which the first and the last nodes are also connected to each other to form a circle.

These can be singly linked or doubly linked.

https://www.programiz.com/dsa/circular-linked-list

52
Q

What is the complexity of a circular linked list?

A

Circular Linked List Complexity

Operation           | Time Complexity | Space Complexity
--------------------|-----------------|-----------------
Insertion Operation | O(1) or O(n)    | O(1)
Deletion Operation  | O(1)            | O(1)

  1. Complexity of Insertion Operation: insertion operations that do not require traversal have a time complexity of O(1), while an insertion that requires traversal has a time complexity of O(n). The space complexity is O(1).
  2. Complexity of Deletion Operation: all deletion operations run with a time complexity of O(1), and the space complexity is O(1).

https://www.programiz.com/dsa/circular-linked-list

53
Q

What are the applications of circular linked lists?

A

It is used in multiplayer games to give a chance to each player to play the game.
Multiple running applications can be placed in a circular linked list on an operating system. The OS keeps iterating over these applications.

https://www.programiz.com/dsa/circular-linked-list

54
Q

What is a hash table?

A

The Hash table data structure stores elements in key-value pairs where

Key - a unique integer that is used for indexing the values
Value - the data associated with keys

https://www.programiz.com/dsa/hash-table

55
Q

What is a hash function?

A

In a hash table, a new index is computed using the key, and the element corresponding to that key is stored at that index. This process is called hashing.

Let k be a key and h(x) be a hash function.

Here, h(k) will give us a new index to store the element linked with k.

https://www.programiz.com/dsa/hash-table

56
Q

What is a hash collision?

A

When the hash function generates the same index for multiple keys, there will be a conflict (what value to be stored in that index). This is called a hash collision.

We can resolve the hash collision using one of the following techniques.

Collision resolution by chaining
Open Addressing: Linear/Quadratic Probing and Double Hashing

https://www.programiz.com/dsa/hash-table

57
Q

How do you resolve hash collisions via chaining?

A

In chaining, if a hash function produces the same index for multiple elements, these elements are stored in the same index by using a doubly-linked list.

If j is the slot for multiple elements, it contains a pointer to the head of the list of elements. If no element is present, j contains NIL.
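
A minimal chaining sketch in Python; plain lists stand in for the per-slot linked lists described above:

```python
class ChainedHashTable:
    def __init__(self, size=8):
        self.size = size
        self.slots = [[] for _ in range(size)]  # one chain per slot

    def _index(self, key):
        return hash(key) % self.size

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for pair in chain:
            if pair[0] == key:      # key already present: update in place
                pair[1] = value
                return
        chain.append([key, value])  # empty slot or collision: append to the chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```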

https://www.programiz.com/dsa/hash-table

58
Q

How do you resolve hash collisions via open addressing?

A

Unlike chaining, open addressing doesn’t store multiple elements into the same slot. Here, each slot is either filled with a single key or left NIL.

There are a few options:

  • linear probing
  • quadratic probing
  • double hashing

https://www.programiz.com/dsa/hash-table

59
Q

What is linear probing?

A

In linear probing, hashing collision is resolved by checking the next slot.

h(k, i) = (h′(k) + i) mod m

where

i = {0, 1, ….}
h′(k) is an auxiliary hash function

If a collision occurs at h(k, 0), then h(k, 1) is checked. In this way, the value of i is incremented linearly.

The problem with linear probing is that a cluster of adjacent slots is filled. When inserting a new element, the entire cluster must be traversed. This adds to the time required to perform operations on the hash table.
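
A minimal insertion sketch in Python; here h′(k) is taken to be k mod m, an assumption made only for illustration:

```python
def linear_probe_insert(table, key):
    """Insert key into a fixed-size table, checking the next slot on collision."""
    m = len(table)
    for i in range(m):             # i = 0, 1, ... as in h(k, i)
        index = (key + i) % m      # h(k, i) = (h'(k) + i) mod m, with h'(k) = k mod m
        if table[index] is None:   # free slot found
            table[index] = key
            return index
    raise OverflowError("hash table is full")

table = [None] * 5
linear_probe_insert(table, 7)   # 7 mod 5 = 2, slot 2 is free
linear_probe_insert(table, 12)  # 12 mod 5 = 2 collides, probes to slot 3
```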

https://www.programiz.com/dsa/hash-table

60
Q

What is quadratic probing?

A

It is for resolving hashing collisions. It works similar to linear probing but the spacing between the slots is increased (greater than one) by using the following relation.

h(k, i) = (h′(k) + c1·i + c2·i^2) mod m

where,

c1 and c2 are positive auxiliary constants,
i = {0, 1, ….}

https://www.programiz.com/dsa/hash-table

61
Q

What is double hashing?

A

If a collision occurs after applying a hash function h(k), then another hash function is calculated for finding the next slot.

h(k, i) = (h1(k) + i·h2(k)) mod m

https://www.programiz.com/dsa/hash-table

62
Q

What makes a good hash function?

A

A good hash function may not prevent collisions completely; however, it can reduce the number of collisions.

https://www.programiz.com/dsa/hash-table

63
Q

What are the different methods to find a good hash function?

A
  • division method
  • multiplication method
  • universal hashing

https://www.programiz.com/dsa/hash-table

64
Q

What is the division method?

A

If k is a key and m is the size of the hash table, the hash function h() is calculated as:

h(k) = k mod m

For example, if the size of a hash table is 10 and k = 112, then h(k) = 112 mod 10 = 2. The value of m must not be a power of 2. This is because the powers of 2 in binary format are 10, 100, 1000, …. When we find k mod m with m = 2^p, we always get just the p lower-order bits of k.

if m = 2^2, k = 17, then h(k) = 17 mod 4 = 10001 mod 100 = 01
if m = 2^3, k = 17, then h(k) = 17 mod 8 = 10001 mod 1000 = 001
if m = 2^4, k = 17, then h(k) = 17 mod 16 = 10001 mod 10000 = 0001
if m = 2^p, then h(k) gives the p lower-order bits of k
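
A quick Python check of the power-of-2 pitfall: with m = 2^4, k mod m keeps exactly the 4 lower-order bits of k:

```python
def h(k, m):
    # Division-method hash.
    return k % m

val = h(112, 10)           # m = 10 is not a power of 2
low_bits = h(0b10001, 16)  # m = 16 = 2**4: result is the 4 lowest bits of k
```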

https://www.programiz.com/dsa/hash-table

65
Q

What is the multiplication method?

A

h(k) = ⌊m(kA mod 1)⌋

where,

kA mod 1 gives the fractional part of kA,
⌊ ⌋ gives the floor value,
A is any constant between 0 and 1. An optimal choice, suggested by Knuth, is A ≈ (√5 − 1)/2.

https://www.programiz.com/dsa/hash-table

66
Q

What is universal hashing?

A

In Universal hashing, the hash function is chosen at random independent of keys.

https://www.programiz.com/dsa/hash-table

67
Q

What are the applications of hash tables?

A

Hash tables are implemented where

constant-time lookup and insertion are required
cryptographic applications
indexing data is required

https://www.programiz.com/dsa/hash-table

68
Q

Why is hashing needed?

A

After storing a large amount of data, we need to perform various operations on it, and lookups are inevitable. Linear search and binary search perform lookups with time complexities of O(n) and O(log n) respectively. As the size of the dataset increases, these costs also become significantly high, which is not acceptable.

We need a technique that does not depend on the size of data. Hashing allows lookups to occur in constant time i.e. O(1).

https://www.programiz.com/dsa/hashing

69
Q

What is a heap?

A

Heap data structure is a complete binary tree that satisfies the heap property, where any given node is

  • always greater than its child node(s), and the key of the root node is the largest among all other nodes. This property is also called the max heap property.
  • always smaller than its child node(s), and the key of the root node is the smallest among all other nodes. This property is also called the min heap property.
70
Q

What is “heapify”?

A

Heapify is the process of creating a heap data structure from a binary tree. It is used to create a Min-Heap or a Max-Heap.

It is used after insertion, removal, or deletion.
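
A minimal max-heapify (sift-down) sketch in Python on a list-based complete binary tree, where the children of index i sit at 2i + 1 and 2i + 2:

```python
def max_heapify(arr, i, n):
    """Sift arr[i] down until the subtree rooted at i satisfies the max-heap property."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        max_heapify(arr, largest, n)  # keep sifting the swapped value down

def build_max_heap(arr):
    # Heapify every internal node, bottom-up.
    for i in range(len(arr) // 2 - 1, -1, -1):
        max_heapify(arr, i, len(arr))

numbers = [3, 9, 2, 1, 4, 5]
build_max_heap(numbers)  # numbers[0] is now the largest element
```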

https://www.programiz.com/dsa/heap-data-structure

71
Q

What are the applications of heaps?

A

Heap is used while implementing a priority queue.
Dijkstra’s Algorithm
Heap Sort

https://www.programiz.com/dsa/heap-data-structure

72
Q

What is a Fibonacci heap?

A

A Fibonacci heap is a data structure that consists of a collection of trees which follow the min heap or max heap property. These two properties are the characteristics of the trees present in a Fibonacci heap.

In a fibonacci heap, a node can have more than two children or no children at all. Also, it has more efficient heap operations than that supported by the binomial and binary heaps.

The Fibonacci heap is so named because the trees are constructed in a way such that a tree of order n has at least F(n+2) nodes in it, where F(n+2) is the (n+2)th Fibonacci number.

https://www.programiz.com/dsa/fibonacci-heap

73
Q

What are the properties of a Fibonacci heap?

A

Important properties of a Fibonacci heap are:

It is a set of min heap-ordered trees. (i.e. The parent is always smaller than the children.)
A pointer is maintained at the minimum element node.
It consists of a set of marked nodes. (Decrease key operation)
The trees within a Fibonacci heap are unordered but rooted.

https://www.programiz.com/dsa/fibonacci-heap

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

What is the memory representation of the nodes in a Fibonacci heap?

A

The roots of all the trees are linked together for faster access. The child nodes of a parent node are connected to each other through a circular doubly linked list.

There are two main advantages of using a circular doubly linked list.

Deleting a node from the tree takes O(1) time.
The concatenation of two such lists takes O(1) time.

https://www.programiz.com/dsa/fibonacci-heap

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

What is the complexity of a Fibonacci heap?

A

Insertion: O(1)
Find Min: O(1)
Union: O(1)
Extract Min: O(log n)
Decrease Key: O(1)
Delete Node: O(log n)

All of these bounds are amortized.

https://www.programiz.com/dsa/fibonacci-heap

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

What are the applications of Fibonacci heaps?

A

To improve the asymptotic running time of Dijkstra’s algorithm.

https://www.programiz.com/dsa/fibonacci-heap

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

What is a tree?

A

A tree is a nonlinear hierarchical data structure that consists of nodes connected by edges.

Other data structures such as arrays, linked lists, stacks, and queues are linear data structures that store data sequentially. In a linear data structure, the time to perform an operation grows with the size of the data, which quickly becomes impractical.

Because a tree is a non-linear data structure, different tree data structures allow quicker and easier access to the data.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

What is a tree node?

A

A node is an entity that contains a key or value and pointers to its child nodes.

The last nodes of each path are called leaf nodes or external nodes that do not contain a link/pointer to child nodes.

A node having at least one child node is called an internal node.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

What is a tree edge?

A

It is the link between any two nodes.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

What is a tree root?

A

It is the topmost node of a tree.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

What is the height of a tree node?

A

The height of a node is the number of edges from the node to the deepest leaf (i.e. the longest path from the node to a leaf node).

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

What is the depth of a tree node?

A

The depth of a node is the number of edges from the root to the node.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

What is the height of a tree?

A

The height of a tree is the height of its root node, or equivalently, the depth of its deepest node.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

What is the degree of a tree node?

A

The degree of a node is the total number of branches of that node.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

What is a forest?

A

A collection of disjoint trees is called a forest.

You can create a forest by deleting the root of a tree.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

What are some types of trees?

A

Binary Tree
Binary Search Tree
AVL Tree
B-Tree

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

What is tree traversal?

A

In order to perform any operation on a tree, you need to reach the specific node. Tree traversal algorithms help in visiting the required nodes in the tree.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

What are the applications of trees?

A

Binary Search Trees (BSTs) are used to quickly check whether an element is present in a set or not.
Heap is a kind of tree that is used for heap sort.
A modified version of a tree called a trie is used in modern routers to store routing information.
Most popular databases use B-trees and T-trees, which are variants of the tree structures covered here, to store their data.
Compilers use a syntax tree to validate the syntax of every program you write.

https://www.programiz.com/dsa/trees

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

What are the ways to traverse a tree?

A

inorder
preorder
postorder

https://www.programiz.com/dsa/tree-traversal

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

What is inorder traversal?

A

First, visit all the nodes in the left subtree
Then the root node
Visit all the nodes in the right subtree

https://www.programiz.com/dsa/tree-traversal

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

What is preorder traversal?

A

Visit root node
Visit all the nodes in the left subtree
Visit all the nodes in the right subtree

https://www.programiz.com/dsa/tree-traversal

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

What is postorder traversal?

A

Visit all the nodes in the left subtree
Visit all the nodes in the right subtree
Visit the root node
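The three traversal orders can be sketched in Python (the Node class and function names are our own, not from the source):

```python
# Minimal binary-tree node for illustration.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(n):    # left subtree, root, right subtree
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def preorder(n):   # root, left subtree, right subtree
    return [n.key] + preorder(n.left) + preorder(n.right) if n else []

def postorder(n):  # left subtree, right subtree, root
    return postorder(n.left) + postorder(n.right) + [n.key] if n else []

#       2
#      / \
#     1   3
root = Node(2, Node(1), Node(3))
assert inorder(root) == [1, 2, 3]
assert preorder(root) == [2, 1, 3]
assert postorder(root) == [1, 3, 2]
```

Note that an inorder traversal of a binary search tree yields its keys in sorted order.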

https://www.programiz.com/dsa/tree-traversal

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

What is a binary tree?

A

A binary tree is a tree data structure in which each parent node can have at most two children. Each node of a binary tree consists of three items:

data item

address of left child

address of right child

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

What are the types of binary tree?

A

full
perfect
complete
degenerate/pathological
skewed
balanced

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

What is a full/proper binary tree?

A

A full binary tree is a special type of binary tree in which every parent node/internal node has either two or no children.

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

What is a perfect binary tree?

A

A perfect binary tree is a type of binary tree in which every internal node has exactly two child nodes and all the leaf nodes are at the same level.

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

What is a complete binary tree?

A

A complete binary tree is just like a full binary tree, but with two major differences:

Every level except possibly the last must be completely filled.
All the leaf elements must lean towards the left.

The last leaf element might not have a right sibling, i.e. a complete binary tree doesn't have to be a full binary tree.

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

What is a degenerate/pathological binary tree?

A

A degenerate or pathological tree is a tree in which every internal node has a single child, either left or right.

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

What is a skewed binary tree?

A

A skewed binary tree is a pathological/degenerate tree in which the tree is either dominated by the left nodes or the right nodes. Thus, there are two types of skewed binary tree: left-skewed binary tree and right-skewed binary tree.

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

What is a balanced binary tree?

A

It is a type of binary tree in which the difference between the height of the left and the right subtree for each node is either 0 or 1.

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

What is the application of binary trees?

A

For easy and quick access to data
In router algorithms
To implement heap data structure
Syntax tree

https://www.programiz.com/dsa/binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

What are the full binary tree theorems?

A

Let, i = the number of internal nodes
n = be the total number of nodes
l = number of leaves
λ = number of levels

The number of leaves is i + 1.
The total number of nodes is 2i + 1.
The number of internal nodes is (n – 1) / 2.
The number of leaves is (n + 1) / 2.
The total number of nodes is 2l – 1.
The number of internal nodes is l – 1.
The number of leaves is at most 2^(λ - 1).

https://www.programiz.com/dsa/full-binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

What are the perfect binary tree theorems?

A

A perfect binary tree of height h has 2^(h + 1) - 1 nodes.
A perfect binary tree with n nodes has height log(n + 1) - 1 = Θ(ln(n)).
A perfect binary tree of height h has 2^h leaf nodes.
The average depth of a node in a perfect binary tree is Θ(ln(n)).

https://www.programiz.com/dsa/perfect-binary-tree

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

What are the relationships between array indices and complete binary tree elements?

A

A complete binary tree has an interesting property that we can use to find the children and parents of any node.

If the index of any element in the array is i, the element at index 2i + 1 becomes its left child and the element at index 2i + 2 becomes its right child. Also, the parent of any element at index i is found at index floor((i - 1) / 2).
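This index arithmetic can be sketched directly in Python (the helper names are our own):

```python
# Index arithmetic for a complete binary tree stored level by level in an array.
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2
def parent(i): return (i - 1) // 2   # floor division gives the parent index

tree = [10, 20, 30, 40, 50]          # a complete binary tree, level by level
assert tree[left(0)] == 20 and tree[right(0)] == 30
assert parent(4) == 1                # node 50 is a child of node 20
```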

https://www.programiz.com/dsa/complete-binary-tree

105
Q

What are the applications of a complete binary tree?

A

Heap-based data structures
Heap sort

https://www.programiz.com/dsa/complete-binary-tree

106
Q

What is the application of a balanced binary tree?

A

AVL tree
Balanced Binary Search Tree

https://www.programiz.com/dsa/balanced-binary-tree

107
Q

What is a binary search tree?

A

Binary search tree is a data structure that quickly allows us to maintain a sorted list of numbers.

It is called a binary tree because each tree node has a maximum of two children.
It is called a search tree because it can be used to search for the presence of a number in O(log n) time on average.

The properties that separate a binary search tree from a regular binary tree are:

All nodes of the left subtree are less than the root node
All nodes of the right subtree are more than the root node
Both subtrees of each node are also BSTs, i.e. they have the above two properties

https://www.programiz.com/dsa/binary-search-tree

108
Q

How do you insert a node into a binary search tree?

A

Inserting a value in the correct position is similar to searching, because we try to maintain the rule that the left subtree is smaller than the root and the right subtree is larger than the root.

We keep going to either the right or the left subtree depending on the value, and when we reach a point where the left or right subtree is null, we put the new node there.
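The steps above can be sketched in Python (the BSTNode layout is our own, not from the source):

```python
# A minimal recursive BST insert.
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:                        # empty spot found: place the node here
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)  # smaller keys go left
    else:
        root.right = insert(root.right, key)  # larger (or equal) keys go right
    return root

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
assert root.key == 8 and root.left.key == 3 and root.right.key == 10
```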

https://www.programiz.com/dsa/binary-search-tree

109
Q

How do you delete a node from a binary search tree?

A

There are three cases for deleting a node from a binary search tree.

Case I

In the first case, the node to be deleted is the leaf node. In such a case, simply delete the node from the tree.

Case II

In the second case, the node to be deleted has a single child node. In such a case, follow the steps below:

Replace that node with its child node.
Remove the child node from its original position.

Case III

In the third case, the node to be deleted has two children. In such a case follow the steps below:

Get the inorder successor of that node.
Replace the node with the inorder successor.
Remove the inorder successor from its original position.
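A Python sketch covering all three cases (the BSTNode layout is our own, not from the source):

```python
# Minimal node layout for illustration.
class BSTNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def delete(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:       # Cases I and II: at most one child,
            return root.right       # so just splice the node out
        if root.right is None:
            return root.left
        succ = root.right           # Case III: find the inorder successor
        while succ.left:            # (smallest key in the right subtree)
            succ = succ.left
        root.key = succ.key         # copy the successor's key here...
        root.right = delete(root.right, succ.key)  # ...and remove the original
    return root

root = BSTNode(8, BSTNode(3, BSTNode(1), BSTNode(6)), BSTNode(10))
root = delete(root, 8)              # two-children case
assert root.key == 10 and root.right is None and root.left.key == 3
```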

https://www.programiz.com/dsa/binary-search-tree

110
Q

What is the complexity of a binary search tree?

A

Time Complexity

			Operation
				Best Case Complexity
				Average Case Complexity
				Worst Case Complexity
		

		
			Search
				O(log n)
				O(log n)
				O(n)
		

		
			Insertion
				O(log n)
				O(log n)
				O(n)
		

		
			Deletion
				O(log n)
				O(log n)
				O(n)

Here, n is the number of nodes in the tree.
Space Complexity

The space complexity for all the operations is O(n).

https://www.programiz.com/dsa/binary-search-tree

111
Q

What are the applications of binary search trees?

A

In multilevel indexing in the database
For dynamic sorting
For managing virtual memory areas in Unix kernel

https://www.programiz.com/dsa/binary-search-tree

112
Q

What is an AVL tree?

A

AVL tree is a self-balancing binary search tree in which each node maintains extra information called a balance factor whose value is either -1, 0 or +1.

The AVL tree got its name from its inventors, Georgy Adelson-Velsky and Evgenii Landis.

https://www.programiz.com/dsa/avl-tree

113
Q

What is the balance factor of a node in an AVL tree?

A

Balance factor of a node in an AVL tree is the difference between the height of the left subtree and that of the right subtree of that node.

Balance Factor = (Height of Left Subtree - Height of Right Subtree) or (Height of Right Subtree - Height of Left Subtree)

The self-balancing property of an AVL tree is maintained by the balance factor. The value of the balance factor should always be -1, 0 or +1.
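The balance factor can be computed as a quick sketch in Python (the Node layout and function names are our own):

```python
# Minimal node layout for illustration.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(n):
    # Height of an empty subtree is -1, so a leaf has height 0.
    return -1 if n is None else 1 + max(height(n.left), height(n.right))

def balance_factor(n):
    return height(n.left) - height(n.right)

balanced = Node(2, Node(1), Node(3))
assert balance_factor(balanced) == 0

skewed = Node(3, Node(2, Node(1)))  # a left-left chain
assert balance_factor(skewed) == 2  # outside [-1, 1]: an AVL tree would rotate here
```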

https://www.programiz.com/dsa/avl-tree

114
Q

How do you rotate the subtrees in an AVL tree?

A

In left rotation, the arrangement of the nodes on the right is transformed into an arrangement on the left node.

In right rotation, the arrangement of the nodes on the left is transformed into an arrangement on the right node.

In left-right rotation, the arrangements are first shifted to the left and then to the right.

In right-left rotation, the arrangements are first shifted to the right and then to the left.

https://www.programiz.com/dsa/avl-tree

115
Q

How do you insert a node into an AVL tree?

A

A newNode is always inserted as a leaf node with balance factor equal to 0.

See the link for the algorithm.

https://www.programiz.com/dsa/avl-tree

116
Q

How do you delete a node in an AVL tree?

A

A node is always deleted as a leaf node. After deleting a node, the balance factors of the nodes get changed. In order to rebalance the balance factor, suitable rotations are performed.

See the link for the algorithm.

https://www.programiz.com/dsa/avl-tree

117
Q

What is the complexity of an AVL tree?

A

Insertion: O(log n)
Deletion: O(log n)
Search: O(log n)

https://www.programiz.com/dsa/avl-tree

118
Q

What are the applications of AVL trees?

A

For indexing large records in databases
For searching in large databases

https://www.programiz.com/dsa/avl-tree

119
Q

What is a B-tree?

A

B-tree is a special type of self-balancing search tree in which each node can contain more than one key and can have more than two children. It is a generalized form of the binary search tree.

It is also known as a height-balanced m-way tree.

The need for the B-tree arose from the need to reduce the time spent accessing physical storage media such as hard disks. Secondary storage devices are slower but have a larger capacity, so data structures that minimize the number of disk accesses were needed.

Other data structures such as the binary search tree, AVL tree, and red-black tree can store only one key in each node. If you have to store a large number of keys, the height of such trees becomes very large, and the access time increases.

However, B-tree can store many keys in a single node and can have multiple child nodes. This decreases the height significantly allowing faster disk accesses.

https://www.programiz.com/dsa/b-tree

120
Q

What are the properties of a B-tree?

A

For each node x, the keys are stored in increasing order.
In each node, there is a boolean value x.leaf which is true if x is a leaf.
If n is the order of the tree, each internal node can contain at most n - 1 keys along with a pointer to each child.
Each node except root can have at most n children and at least n/2 children.
All leaves have the same depth (i.e. height-h of the tree).
The root has at least 2 children and contains a minimum of 1 key.
If n ≥ 1, then for any n-key B-tree of height h and minimum degree t ≥ 2, h ≤ log_t((n + 1)/2).

https://www.programiz.com/dsa/b-tree

121
Q

What is the search complexity of a B-tree?

A

Worst case Time complexity: Θ(log n)

Average case Time complexity: Θ(log n)

Best case Time complexity: Θ(log n)

Average case Space complexity: Θ(n)

Worst case Space complexity: Θ(n)

https://www.programiz.com/dsa/b-tree

122
Q

What are the applications of a B-tree?

A

databases and file systems
to store blocks of data (secondary storage media)
multilevel indexing

https://www.programiz.com/dsa/b-tree

123
Q

How do you insert an element into a B-tree?

A

Inserting an element into a B-tree consists of two events: searching for the appropriate node to insert the element, and splitting the node if required. The insertion operation always proceeds bottom-up.

See the link.

https://www.programiz.com/dsa/insertion-into-a-b-tree

124
Q

How do you delete an element from a B-tree?

A

Deleting an element on a B-tree consists of three main events: searching the node where the key to be deleted exists, deleting the key and balancing the tree if required.

While deleting from a B-tree, a condition called underflow may occur. Underflow occurs when a node contains less than the minimum number of keys it should hold.

The terms to be understood before studying deletion operation are:

Inorder Predecessor
The largest key in the left subtree of a node is called its inorder predecessor.
Inorder Successor
The smallest key in the right subtree of a node is called its inorder successor.

See the link.

https://www.programiz.com/dsa/deletion-from-a-b-tree

125
Q

What is the deletion complexity of a B-tree?

A

Best case Time complexity: Θ(log n)

Average case Space complexity: Θ(n)

Worst case Space complexity: Θ(n)

https://www.programiz.com/dsa/b-plus-tree

126
Q

What is a B+ tree?

A

A B+ tree is an advanced form of a self-balancing tree in which all the values are present in the leaf level.

An important concept to be understood before learning B+ trees is multilevel indexing. In multilevel indexing, an index of indices is created, which makes accessing the data easier and faster.

Properties of a B+ tree:

All leaves are at the same level.
The root has at least two children.
Each node except the root can have a maximum of m children and at least m/2 children.
Each node can contain a maximum of m - 1 keys and a minimum of ⌈m/2⌉ - 1 keys.

https://www.programiz.com/dsa/b-plus-tree

127
Q

Compare B and B+ trees.

A

The data pointers are present only at the leaf nodes on a B+ tree whereas the data pointers are present in the internal, leaf or root nodes on a B-tree.

The leaves are not connected with each other on a B-tree whereas they are connected on a B+ tree.

Operations on a B+ tree are faster than on a B-tree.

https://www.programiz.com/dsa/b-plus-tree

128
Q

What is the search complexity of a B+ tree?

A

Time Complexity

If linear search is implemented inside a node, then the total complexity is Θ(log_t n).

If binary search is used, then the total complexity is Θ(log_2 t · log_t n).

https://www.programiz.com/dsa/b-plus-tree

129
Q

What are the applications of B+ trees?

A

Multilevel Indexing
Faster operations on the tree (insertion, deletion, search)
Database indexing

https://www.programiz.com/dsa/b-plus-tree

130
Q

How do you insert nodes into a B+ tree?

A

Inserting an element into a B+ tree consists of three main events: searching the appropriate leaf, inserting the element and balancing/splitting the tree.

See the link.

https://www.programiz.com/dsa/insertion-on-a-b-plus-tree

131
Q

What is the insertion complexity for a B+ tree?

A

Time complexity: Θ(t · log_t n)

The complexity is dominated by Θ(log_t n).

https://www.programiz.com/dsa/insertion-on-a-b-plus-tree

132
Q

How do you delete nodes from a B+ tree?

A

Deleting an element from a B+ tree consists of three main events: searching the node where the key to be deleted exists, deleting the key, and balancing the tree if required. Underflow is a situation when there are fewer keys in a node than the minimum number of keys it should hold.

See the link.

https://www.programiz.com/dsa/deletion-from-a-b-plus-tree

133
Q

What is a Red-Black tree?

A

Red-Black tree is a self-balancing binary search tree in which each node contains an extra bit for denoting the color of the node, either red or black.

A red-black tree satisfies the following properties:

Red/Black Property: Every node is colored, either red or black.
Root Property: The root is black.
Leaf Property: Every leaf (NIL) is black.
Red Property: If a red node has children then, the children are always black.
Depth Property: For each node, any simple path from this node to any of its descendant leaf has the same black-depth (the number of black nodes).

https://www.programiz.com/dsa/red-black-tree

134
Q

How does a red-black tree maintain self-balancing?

A

The red-black color is meant for balancing the tree.

The limitations put on the node colors ensure that any simple path from the root to a leaf is not more than twice as long as any other such path. It helps in maintaining the self-balancing property of the red-black tree.

https://www.programiz.com/dsa/red-black-tree

135
Q

How do you perform rotations on a red-black tree?

A

see https://www.programiz.com/dsa/red-black-tree

136
Q

What are the applications of red-black trees?

A

To implement finite maps
To implement Java packages: java.util.TreeMap and java.util.TreeSet
To implement Standard Template Libraries (STL) in C++: multiset, map, multimap
In Linux Kernel

https://www.programiz.com/dsa/red-black-tree

137
Q

How do you insert a node into a red-black tree?

A

https://www.programiz.com/dsa/insertion-in-a-red-black-tree

138
Q

How do you delete a node from a red-black tree?

A

https://www.programiz.com/dsa/deletion-from-a-red-black-tree

139
Q

What is a graph data structure?

A

A graph data structure is a collection of nodes that have data and are connected to other nodes.

More precisely, a graph is a data structure (V, E) that consists of

A collection of vertices V
A collection of edges E, represented as ordered pairs of vertices (u,v)

https://www.programiz.com/dsa/graph

140
Q

What is adjacency in a graph data structure?

A

Adjacency: A vertex is said to be adjacent to another vertex if there is an edge connecting them; two vertices with no edge between them are not adjacent.

https://www.programiz.com/dsa/graph

141
Q

What is a path in a graph data structure?

A

Path: A sequence of edges that allows you to go from vertex A to vertex B is called a path. For example, the edges 0-1 and 1-2 form a path from vertex 0 to vertex 2, and the single edge 0-2 is another.

https://www.programiz.com/dsa/graph

142
Q

What is a directed graph?

A

Directed Graph: A graph in which an edge (u,v) doesn’t necessarily mean that there is an edge (v, u) as well. The edges in such a graph are represented by arrows to show the direction of the edge.

https://www.programiz.com/dsa/graph

143
Q

What are the different graph representations?

A

adjacency matrix
adjacency list

https://www.programiz.com/dsa/graph

144
Q

What are the most common graph operations?

A

The most common graph operations are:

Check if the element is present in the graph
Graph Traversal
Add elements(vertex, edges) to graph
Finding the path from one vertex to another

https://www.programiz.com/dsa/graph

145
Q

What is an undirected graph?

A

An undirected graph is a graph in which the edges do not point in any direction (ie. the edges are bidirectional).

https://www.programiz.com/dsa/spanning-tree-and-minimum-spanning-tree

146
Q

What is a connected graph?

A

A connected graph is a graph in which there is always a path from a vertex to any other vertex.

https://www.programiz.com/dsa/spanning-tree-and-minimum-spanning-tree

147
Q

What is a spanning tree?

A

A spanning tree is a sub-graph of an undirected connected graph, which includes all the vertices of the graph with a minimum possible number of edges. If a vertex is missed, then it is not a spanning tree.

The edges may or may not have weights assigned to them.

The total number of spanning trees with n vertices that can be created from a complete graph is equal to n^(n - 2).

https://www.programiz.com/dsa/spanning-tree-and-minimum-spanning-tree

148
Q

What is a minimum spanning tree?

A

A minimum spanning tree is a spanning tree in which the sum of the weights of the edges is as small as possible.

https://www.programiz.com/dsa/spanning-tree-and-minimum-spanning-tree

149
Q

What are the applications of spanning trees?

A

Computer Network Routing Protocol
Cluster Analysis
Civil Network Planning

https://www.programiz.com/dsa/spanning-tree-and-minimum-spanning-tree

150
Q

What are the applications of minimal spanning trees?

A

To find paths in the map
To design networks like telecommunication networks, water supply networks, and electrical grids.

https://www.programiz.com/dsa/spanning-tree-and-minimum-spanning-tree

151
Q

What is a strongly connected component?

A

A strongly connected component is a portion of a directed graph in which there is a path from each vertex to every other vertex. It is applicable only to directed graphs.

Within a strongly connected component, every vertex can reach every other vertex through a directed path.

These components can be found using Kosaraju’s Algorithm.

https://www.programiz.com/dsa/strongly-connected-components

152
Q

What is Kosaraju’s algorithm?

A

Kosaraju’s Algorithm is based on the depth-first search algorithm implemented twice.

See the link.

https://www.programiz.com/dsa/strongly-connected-components

153
Q

What is the complexity of Kosaraju’s algorithm?

A

Kosaraju’s algorithm runs in linear time i.e. O(V+E).

https://www.programiz.com/dsa/strongly-connected-components

154
Q

What are the applications of strongly connected components?

A

Vehicle routing applications
Maps
Model-checking in formal verification

https://www.programiz.com/dsa/strongly-connected-components

155
Q

What is an adjacency matrix?

A

An adjacency matrix is a way of representing a graph as a matrix of booleans (0’s and 1’s). A finite graph can be represented in the form of a square matrix on a computer, where the boolean value of the matrix indicates if there is a direct path between two vertices.

https://www.programiz.com/dsa/graph-adjacency-matrix

156
Q

What are the pros and cons of adjacency matrices?

A

Pros of Adjacency Matrix

The basic operations like adding an edge, removing an edge, and checking whether there is an edge from vertex i to vertex j are extremely time efficient, constant time operations.
If the graph is dense and the number of edges is large, an adjacency matrix should be the first choice. Even if the graph and the adjacency matrix are sparse, we can represent them using data structures for sparse matrices.
The biggest advantage, however, comes from the use of matrices. Recent advances in hardware enable us to perform even expensive matrix operations on the GPU.
By performing operations on the adjacency matrix, we can get important insights into the nature of the graph and the relationships between its vertices.

Cons of Adjacency Matrix

The VxV space requirement of the adjacency matrix makes it a memory hog. Graphs out in the wild usually don't have too many connections and this is the major reason why adjacency lists are the better choice for most tasks.
While basic operations are easy, operations like inEdges and outEdges are expensive when using the adjacency matrix representation.

https://www.programiz.com/dsa/graph-adjacency-matrix

157
Q

What are the applications of adjacency matrices?

A

Creating routing table in networks
Navigation tasks

https://www.programiz.com/dsa/graph-adjacency-matrix

158
Q

What is an adjacency list?

A

An adjacency list represents a graph as an array of linked lists. The index of the array represents a vertex and each element in its linked list represents the other vertices that form an edge with the vertex.
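As a sketch (not from the source card), an adjacency list maps naturally onto a Python dict of neighbour lists; the vertex labels below are our own:

```python
# Adjacency list for a small undirected graph:
# each vertex maps to the list of vertices it shares an edge with.
graph = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}

assert 3 in graph[2]       # edge 2-3 exists
assert graph[3] == [2]     # vertex 3 has a single neighbour
```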

https://www.programiz.com/dsa/graph-adjacency-list

159
Q

What are the pros and cons of adjacency lists?

A

Pros of Adjacency List

An adjacency list is efficient in terms of storage because we only need to store the values for the edges. For a sparse graph with millions of vertices and edges, this can mean a lot of saved space.
It also helps to find all the vertices adjacent to a vertex easily.

Cons of Adjacency List

Checking whether an edge exists between two vertices is slower than with an adjacency matrix, because all of a vertex's connected nodes must be explored to find it.

https://www.programiz.com/dsa/graph-adjacency-list

160
Q

What are the applications of adjacency lists?

A

It is faster to use adjacency lists for graphs having a small number of edges.

https://www.programiz.com/dsa/graph-adjacency-list

161
Q

What is depth first search?

A

Depth First Search or Depth First Traversal is a recursive algorithm for searching all the vertices of a graph or tree data structure. Traversal means visiting all the nodes of a graph.
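A minimal recursive DFS over a dict-based adjacency list can be sketched as follows (the graph layout and names are our own):

```python
# Recursive depth-first search: go as deep as possible before backtracking.
def dfs(graph, start, visited=None):
    if visited is None:
        visited = []
    visited.append(start)
    for neighbour in graph[start]:
        if neighbour not in visited:   # recurse into unvisited neighbours
            dfs(graph, neighbour, visited)
    return visited

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
assert dfs(graph, 0) == [0, 1, 3, 2]   # depth-first: 3 is reached before 2
```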

https://www.programiz.com/dsa/graph-dfs

162
Q

What is the complexity of depth first search?

A

The time complexity of the DFS algorithm is represented in the form of O(V + E), where V is the number of nodes and E is the number of edges.

The space complexity of the algorithm is O(V).

https://www.programiz.com/dsa/graph-dfs

163
Q

What are the applications of depth first search?

A

For finding the path
To test if the graph is bipartite
For finding the strongly connected components of a graph
For detecting cycles in a graph

https://www.programiz.com/dsa/graph-dfs

164
Q

What is breadth first search?

A

Traversal means visiting all the nodes of a graph. Breadth First Traversal or Breadth First Search is an algorithm for searching all the vertices of a graph or tree data structure; unlike DFS, it is driven by a queue rather than recursion.
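A minimal BFS sketch (graph layout and names are our own), where a FIFO queue produces the level-by-level visiting order:

```python
from collections import deque

# Breadth-first search: visit all neighbours before going deeper.
def bfs(graph, start):
    visited, queue = [start], deque([start])
    while queue:
        vertex = queue.popleft()           # FIFO order gives breadth-first order
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
assert bfs(graph, 0) == [0, 1, 2, 3]       # both neighbours of 0 before 3
```

Compare this with the DFS order on the same graph, which reaches 3 before 2.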

https://www.programiz.com/dsa/graph-bfs

165
Q

What is the complexity of breadth first search?

A

The time complexity of the BFS algorithm is represented in the form of O(V + E), where V is the number of nodes and E is the number of edges.

The space complexity of the algorithm is O(V).

https://www.programiz.com/dsa/graph-bfs

166
Q

What are the applications of breadth first search?

A

For building search indexes
For GPS navigation
Path finding algorithms
In Ford-Fulkerson algorithm to find maximum flow in a network
Cycle detection in an undirected graph
In minimum spanning tree

https://www.programiz.com/dsa/graph-bfs

167
Q

What is the Bellman Ford’s algorithm?

A

The Bellman-Ford algorithm helps us find the shortest path from a vertex to all other vertices of a weighted graph.

It is similar to Dijkstra’s algorithm, but it can also work with graphs in which edges have negative weights.
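A Bellman-Ford sketch over an edge list (the graph layout, vertex labels, and function name are our own, not from the source):

```python
# Bellman-Ford: relax every edge V-1 times, then check for negative cycles.
def bellman_ford(vertices, edges, source):
    INF = float("inf")
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):   # V-1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra pass: any improvement now
        if dist[u] + w < dist[v]:        # means a negative-weight cycle exists
            raise ValueError("graph contains a negative-weight cycle")
    return dist

edges = [("A", "B", 4), ("A", "C", 2), ("C", "B", -1)]
dist = bellman_ford(["A", "B", "C"], edges, "A")
assert dist["B"] == 1   # the negative edge makes A -> C -> B cheaper than A -> B
```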

https://www.programiz.com/dsa/bellman-ford-algorithm

168
Q

Why would one ever have edges with negative weights in real life?

A

Negative weight edges might seem useless at first but they can explain a lot of phenomena like cashflow, the heat released/absorbed in a chemical reaction, etc.

For instance, if there are different ways to reach from one chemical A to another chemical B, each method will have sub-reactions involving both heat dissipation and absorption.

If we want to find the set of reactions where minimum energy is required, then we will need to be able to factor in the heat absorption as negative weights and heat dissipation as positive weights.

https://www.programiz.com/dsa/bellman-ford-algorithm

169
Q

Why do we need to be careful with negative weights?

A

Negative weight edges can create negative weight cycles i.e. a cycle that will reduce the total path distance by coming back to the same point.

Shortest path algorithms like Dijkstra’s Algorithm that aren’t able to detect such a cycle can give an incorrect result because they can go through a negative weight cycle and reduce the path length.

https://www.programiz.com/dsa/bellman-ford-algorithm

170
Q

What is the difference between Bellman Ford and Dijkstra?

A

Bellman Ford’s algorithm and Dijkstra’s algorithm are very similar in structure. While Dijkstra looks only at the immediate neighbors of a vertex, Bellman Ford goes through each edge in every iteration.

https://www.programiz.com/dsa/bellman-ford-algorithm

171
Q

What is the complexity of Bellman Ford’s algorithm?

A

Time Complexity

Best Case Complexity: O(E)
Average Case Complexity: O(VE)
Worst Case Complexity: O(VE)

Space Complexity

The space complexity is O(V).

https://www.programiz.com/dsa/bellman-ford-algorithm

172
Q

What are the applications of Bellman Ford’s algorithm?

A

For calculating shortest paths in routing algorithms
For finding the shortest path

https://www.programiz.com/dsa/bellman-ford-algorithm

173
Q

What is bubble sort?

A

Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them until they are in the intended order.

Just like air bubbles in water that rise up to the surface, each element of the array moves toward the end in each iteration. Therefore, it is called bubble sort.

https://www.programiz.com/dsa/bubble-sort

174
Q

What is optimized bubble sort?

A

In the above algorithm, all the comparisons are made even if the array is already sorted.

This increases the execution time.

To solve this, we can introduce an extra variable, swapped. The value of swapped is set to true if any elements are swapped during an iteration. Otherwise, it is set to false.

After an iteration, if there is no swapping, the value of swapped will be false. This means the elements are already sorted and there is no need to perform further iterations.

This reduces the execution time and optimizes the bubble sort.
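A minimal sketch of the optimized version with the swapped flag (names are illustrative):

```python
def bubble_sort(arr):
    """Optimized bubble sort: stops early when a pass makes no swaps."""
    arr = arr[:]              # sort a copy, leave the input unchanged
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:       # no swaps: already sorted, O(n) best case
            break
    return arr
```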

https://www.programiz.com/dsa/bubble-sort

175
Q

What is bubble sort complexity?

A

Time Complexity

Best: O(n)
Worst: O(n²)
Average: O(n²)

Space Complexity: O(1)
Stability: Yes

1. Time Complexities

Worst Case Complexity: O(n²)
If we want to sort in ascending order and the array is in descending order, then the worst case occurs.

Best Case Complexity: O(n)
If the array is already sorted, then there is no need for sorting.

Average Case Complexity: O(n²)
It occurs when the elements of the array are in jumbled order (neither ascending nor descending).

2. Space Complexity

Space complexity is O(1) because an extra variable is used for swapping. The optimized bubble sort adds one more variable (swapped), which is still a constant amount of extra storage, so the space complexity remains O(1).

https://www.programiz.com/dsa/bubble-sort

176
Q

What are the applications of bubble sort?

A

Bubble sort is used if

complexity does not matter
short and simple code is preferred

https://www.programiz.com/dsa/bubble-sort

177
Q

What is selection sort?

A

Selection sort is a sorting algorithm that selects the smallest element from an unsorted list in each iteration and places that element at the beginning of the unsorted list.
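The description above can be sketched as follows (a minimal illustration; names are assumptions):

```python
def selection_sort(arr):
    arr = arr[:]
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Move it to the front of the unsorted part: at most one swap per pass.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

The one-swap-per-pass property is why selection sort does only O(n) writes, which matters on write-limited media such as flash memory.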

https://www.programiz.com/dsa/selection-sort

178
Q

What is the complexity of selection sort?

A

Time Complexity

Best: O(n²)
Worst: O(n²)
Average: O(n²)

Space Complexity: O(1)
Stability: No

Time Complexities:

Worst Case Complexity: O(n²)
If we want to sort in ascending order and the array is in descending order, then the worst case occurs.

Best Case Complexity: O(n²)
It occurs when the array is already sorted.

Average Case Complexity: O(n²)
It occurs when the elements of the array are in jumbled order (neither ascending nor descending).

The time complexity of selection sort is the same in all cases. At every step, you have to find the minimum element and put it in the right place, and the minimum element is not known until the end of the array is reached.

Space Complexity:

Space complexity is O(1) because an extra variable temp is used.

https://www.programiz.com/dsa/selection-sort

179
Q

What are the applications of selection sort?

A

The selection sort is used when

a small list is to be sorted
cost of swapping does not matter
checking of all the elements is compulsory
cost of writing to a memory matters like in flash memory (number of writes/swaps is O(n) as compared to O(n2) of bubble sort)

https://www.programiz.com/dsa/selection-sort

180
Q

What is insertion sort?

A

Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each iteration.

Insertion sort works similarly to the way we sort cards in our hand in a card game.

We assume that the first card is already sorted; then, we select an unsorted card. If the unsorted card is greater than the card in hand, it is placed to the right; otherwise, to the left. In the same way, other unsorted cards are taken and put in their right place.

A similar approach is used by insertion sort.
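The card analogy translates into a short sketch (names are illustrative):

```python
def insertion_sort(arr):
    arr = arr[:]
    for i in range(1, len(arr)):
        key = arr[i]           # the "card" being placed
        j = i - 1
        # Shift larger sorted elements one slot to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key       # drop the card into its place
    return arr
```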

https://www.programiz.com/dsa/insertion-sort

181
Q

What is insertion sort complexity?

A

Time Complexity

Best: O(n)
Worst: O(n²)
Average: O(n²)

Space Complexity: O(1)
Stability: Yes

Time Complexities

Worst Case Complexity: O(n²)
Suppose an array is in ascending order and you want to sort it in descending order. In this case, the worst case complexity occurs.

Each element has to be compared with each of the other elements, so for every nth element, (n-1) comparisons are made.

Thus, the total number of comparisons = n·(n-1) ≈ n²

Best Case Complexity: O(n)
When the array is already sorted, the outer loop runs n times whereas the inner loop does not run at all. So, there are only n comparisons. Thus, complexity is linear.

Average Case Complexity: O(n²)
It occurs when the elements of an array are in jumbled order (neither ascending nor descending).

Space Complexity

Space complexity is O(1) because an extra variable key is used.

https://www.programiz.com/dsa/insertion-sort

182
Q

What are the applications of insertion sort?

A

The insertion sort is used when:

the array has a small number of elements
there are only a few elements left to be sorted

https://www.programiz.com/dsa/insertion-sort

183
Q

What is merge sort?

A

Merge Sort is one of the most popular sorting algorithms that is based on the principle of Divide and Conquer Algorithm.

Here, a problem is divided into multiple sub-problems. Each sub-problem is solved individually. Finally, sub-problems are combined to form the final solution.
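A minimal recursive sketch of this divide-and-conquer scheme (this top-down list-returning version is one common way to write it; names are illustrative):

```python
def merge_sort(arr):
    if len(arr) <= 1:                 # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # divide and solve each half
    right = merge_sort(arr[mid:])
    merged = []                       # combine: merge two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```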

https://www.programiz.com/dsa/merge-sort

184
Q

What is the divide and conquer strategy?

A

Using the Divide and Conquer technique, we divide a problem into subproblems. When the solution to each subproblem is ready, we ‘combine’ the results from the subproblems to solve the main problem.

https://www.programiz.com/dsa/merge-sort

185
Q

What is the complexity of merge sort?

A

Time Complexity

Best: O(n log n)
Worst: O(n log n)
Average: O(n log n)

Space Complexity: O(n)
Stability: Yes

Merge sort takes O(n log n) time in the best, worst, and average cases, and its space complexity is O(n).

https://www.programiz.com/dsa/merge-sort

186
Q

What are the applications of merge sort?

A

Inversion count problem
External sorting
E-commerce applications

https://www.programiz.com/dsa/merge-sort

187
Q

What is quicksort?

A

Quicksort is a sorting algorithm based on the divide and conquer approach where

An array is divided into subarrays by selecting a pivot element (element selected from the array).

While dividing the array, the pivot element should be positioned in such a way that elements less than pivot are kept on the left side and elements greater than pivot are on the right side of the pivot.
The left and right subarrays are also divided using the same approach. This process continues until each subarray contains a single element.
At this point, elements are already sorted. Finally, elements are combined to form a sorted array.
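A compact sketch of the pivot-and-partition idea (note this list-building version is chosen for clarity; the classic algorithm partitions in place, which is what gives the O(log n) space bound quoted below):

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]        # pivot choice is arbitrary in this sketch
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    # Recursively sort each side, then combine around the pivot.
    return quicksort(less) + equal + quicksort(greater)
```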

https://www.programiz.com/dsa/quick-sort

188
Q

What is the complexity of quicksort?

A

Time Complexity

Best: O(n log n)
Worst: O(n²)
Average: O(n log n)

Space Complexity: O(log n)
Stability: No

1. Time Complexities

Worst Case Complexity [Big-O]: O(n²)
It occurs when the pivot element picked is either the greatest or the smallest element. This condition leads to the case in which the pivot element lies at an extreme end of the sorted array. One sub-array is always empty and the other sub-array contains n - 1 elements, so quicksort is called only on this sub-array. However, the quicksort algorithm has better performance for scattered pivots.

Best Case Complexity [Big-omega]: O(n log n)
It occurs when the pivot element is always the middle element or near to the middle element.

Average Case Complexity [Big-theta]: O(n log n)
It occurs when the above conditions do not occur.

2. Space Complexity

The space complexity for quicksort is O(log n).

https://www.programiz.com/dsa/quick-sort

189
Q

What are the applications of quicksort?

A

Quicksort algorithm is used when

the programming language is good for recursion
time complexity matters
space complexity matters

https://www.programiz.com/dsa/quick-sort

190
Q

What is counting sort?

A

Counting sort is a sorting algorithm that sorts the elements of an array by counting the number of occurrences of each unique element in the array. The count is stored in an auxiliary array and the sorting is done by mapping the count as an index of the auxiliary array.
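The count-then-map idea can be sketched as follows for non-negative integers (a minimal illustration; names are assumptions). The prefix-sum pass and the reversed final pass make the sort stable:

```python
def counting_sort(arr):
    """Stable counting sort for non-negative integers."""
    if not arr:
        return []
    k = max(arr)
    count = [0] * (k + 1)
    for x in arr:                 # count each value's occurrences
        count[x] += 1
    # Prefix sums turn counts into final positions.
    for i in range(1, k + 1):
        count[i] += count[i - 1]
    output = [0] * len(arr)
    for x in reversed(arr):       # reversed traversal preserves stability
        count[x] -= 1
        output[count[x]] = x
    return output
```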

https://www.programiz.com/dsa/counting-sort

191
Q

What is the complexity of counting sort?

A

Time Complexity

Best: O(n+k)
Worst: O(n+k)
Average: O(n+k)

Space Complexity: O(max)
Stability: Yes

Time Complexities

There are four main loops. (Finding the greatest value can be done outside the function.)

1st for-loop: O(max)
2nd for-loop: O(size)
3rd for-loop: O(max)
4th for-loop: O(size)

Overall complexity = O(max) + O(size) + O(max) + O(size) = O(max + size)

Worst Case Complexity: O(n+k)
Best Case Complexity: O(n+k)
Average Case Complexity: O(n+k)

In all the above cases, the complexity is the same because no matter how the elements are placed in the array, the algorithm goes through n+k times.

There is no comparison between any elements, so it is better than comparison-based sorting techniques. But it is bad if the integers are very large, because an array of that size must be allocated.

Space Complexity

The space complexity of Counting Sort is O(max). The larger the range of elements, the larger the space complexity.

https://www.programiz.com/dsa/counting-sort

192
Q

What are the applications of counting sort?

A

Counting sort is used when:

there are smaller integers with multiple counts.
linear complexity is the need.

https://www.programiz.com/dsa/counting-sort

193
Q

What is radix sort?

A

Radix sort is a sorting algorithm that sorts the elements by first grouping the individual digits of the same place value. Then, sort the elements according to their increasing/decreasing order.
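A minimal LSD (least-significant-digit-first) sketch for non-negative decimal integers, using stable bucketing per digit (names are illustrative):

```python
def radix_sort(arr):
    """LSD radix sort: stably bucket by each decimal digit, least
    significant first."""
    if not arr:
        return []
    exp = 1
    while max(arr) // exp > 0:
        buckets = [[] for _ in range(10)]      # one bucket per digit 0-9
        for x in arr:
            buckets[(x // exp) % 10].append(x)  # appending keeps it stable
        arr = [x for bucket in buckets for x in bucket]
        exp *= 10
    return arr
```

Stability of each digit pass is essential: it preserves the ordering established by the previous, less significant digits.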

https://www.programiz.com/dsa/radix-sort

194
Q

What is radix sort’s complexity?

A

Time Complexity

Best: O(n+k)
Worst: O(n+k)
Average: O(n+k)

Space Complexity: O(max)
Stability: Yes

Since radix sort is a non-comparative algorithm, it has advantages over comparative sorting algorithms.

For the radix sort that uses counting sort as an intermediate stable sort, the time complexity is O(d(n+k)).

Here, d is the number of digit passes and O(n+k) is the time complexity of counting sort.

Thus, radix sort has linear time complexity, which is better than the O(n log n) of comparative sorting algorithms.

For very large numbers, or for numbers in other bases such as 32-bit and 64-bit integers, radix sort can still perform in linear time, but the intermediate sort then takes a large amount of space.

This makes radix sort space inefficient, and is the reason why this sort is not used in software libraries.

https://www.programiz.com/dsa/radix-sort

195
Q

What are radix sort’s applications?

A

Radix sort is implemented in

DC3 algorithm (Kärkkäinen-Sanders-Burkhardt) while making a suffix array.
places where there are numbers in large ranges.

https://www.programiz.com/dsa/radix-sort

196
Q

What is bucket sort?

A

Bucket Sort is a sorting algorithm that divides the unsorted array elements into several groups called buckets. Each bucket is then sorted by using any of the suitable sorting algorithms or recursively applying the same bucket algorithm.

Finally, the sorted buckets are combined to form a final sorted array.
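A minimal sketch for floats in [0, 1), the classic setting for bucket sort (the bucket count and names are assumptions): scatter into buckets, sort each bucket, then gather in order.

```python
def bucket_sort(arr, n_buckets=10):
    """Bucket sort for floats in [0, 1)."""
    buckets = [[] for _ in range(n_buckets)]
    for x in arr:                          # scatter
        buckets[int(x * n_buckets)].append(x)
    result = []
    for bucket in buckets:                 # sort each bucket, then gather
        result.extend(sorted(bucket))
    return result
```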

https://www.programiz.com/dsa/bucket-sort

197
Q

What is the scatter gather approach?

A

The process of bucket sort can be understood as a scatter-gather approach. Here, elements are first scattered into buckets then the elements in each bucket are sorted. Finally, the elements are gathered in order.

https://www.programiz.com/dsa/bucket-sort

198
Q

What is bucket sort’s complexity?

A

Time Complexity

Best: O(n+k)
Worst: O(n²)
Average: O(n)

Space Complexity: O(n+k)
Stability: Yes

Worst Case Complexity: O(n²)
When there are elements of close range in the array, they are likely to be placed in the same bucket. This may result in some buckets having more elements than others, so the complexity depends on the sorting algorithm used to sort the elements of the bucket. The complexity becomes even worse when the elements are in reverse order. If insertion sort is used to sort the elements of a bucket, then the time complexity becomes O(n²).

Best Case Complexity: O(n+k)
It occurs when the elements are uniformly distributed in the buckets, with a nearly equal number of elements in each bucket. The complexity becomes even better if the elements inside the buckets are already sorted. If insertion sort is used to sort the elements of a bucket, then the overall complexity in the best case will be linear, i.e. O(n+k): O(n) for making the buckets and O(k) for sorting the elements of the buckets using algorithms with linear time complexity in the best case.

Average Case Complexity: O(n)
It occurs when the elements are distributed randomly in the array. Even if the elements are not distributed uniformly, bucket sort runs in linear time. This holds true until the sum of the squares of the bucket sizes is linear in the total number of elements.

https://www.programiz.com/dsa/bucket-sort

199
Q

What are the applications of bucket sort?

A

Bucket sort is used when:

input is uniformly distributed over a range.
there are floating point values

https://www.programiz.com/dsa/bucket-sort

200
Q

What is heap sort?

A

Heap Sort is a popular and efficient sorting algorithm in computer programming. Learning how to write the heap sort algorithm requires knowledge of two types of data structures - arrays and trees.

https://www.programiz.com/dsa/heap-sort

201
Q

What is the complexity of heap sort?

A

Time Complexity

Best: O(n log n)
Worst: O(n log n)
Average: O(n log n)

Space Complexity: O(1)
Stability: No

https://www.programiz.com/dsa/heap-sort

202
Q

What are the applications of heap sort?

A

Systems concerned with security and embedded systems such as Linux Kernel use Heap Sort because of the O(n log n) upper bound on Heapsort’s running time and constant O(1) upper bound on its auxiliary storage.

Although Heap Sort has O(n log n) time complexity even in the worst case, it has fewer applications than other sorting algorithms such as Quick Sort and Merge Sort. However, its underlying data structure, the heap, can be used efficiently when we want to repeatedly extract the smallest (or largest) item without the overhead of keeping the remaining items sorted, e.g. in priority queues.

https://www.programiz.com/dsa/heap-sort

203
Q

What is shell sort?

A

Shell sort is a generalized version of the insertion sort algorithm. It first sorts elements that are far apart from each other and successively reduces the interval between the elements to be sorted.

The interval between the elements is reduced based on the sequence used. Some of the optimal sequences that can be used in the shell sort algorithm are:

Shell’s original sequence: N/2 , N/4 , …, 1
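A minimal sketch using Shell's original N/2, N/4, …, 1 sequence (names are illustrative): each pass is a gapped insertion sort, and the final gap of 1 is an ordinary insertion sort on a nearly-sorted array.

```python
def shell_sort(arr):
    """Shell sort with Shell's original gap sequence N/2, N/4, ..., 1."""
    arr = arr[:]
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: compare elements gap positions apart.
        for i in range(gap, n):
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2
    return arr
```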

https://www.programiz.com/dsa/shell-sort

204
Q

What is Shell’s original shell sort sequence?

A

Shell’s original sequence: N/2 , N/4 , …, 1

https://www.programiz.com/dsa/shell-sort

205
Q

What are Knuth’s shell sort increments?

A

Knuth’s increments: 1, 4, 13, …, (3^k – 1) / 2

https://www.programiz.com/dsa/shell-sort

206
Q

What are Sedgewick’s shell sort increments?

A

Sedgewick’s increments: 1, 8, 23, 77, 281, 1073, 4193, 16577, …, i.e. 4^(j+1) + 3·2^j + 1, prefixed with 1

https://www.programiz.com/dsa/shell-sort

207
Q

What are Hibbard’s shell sort increments?

A

Hibbard’s increments: 1, 3, 7, 15, 31, 63, 127, 255, 511…

https://www.programiz.com/dsa/shell-sort

208
Q

What are Papernov & Stasevich’s shell sort increments?

A

Papernov & Stasevich increment: 1, 3, 5, 9, 17, 33, 65,…

https://www.programiz.com/dsa/shell-sort

209
Q

What are Pratt’s shell sort increments?

A

Pratt’s increments: 1, 2, 3, 4, 6, 9, 8, 12, 18, 27, 16, 24, 36, 54, 81, …

https://www.programiz.com/dsa/shell-sort

210
Q

What is the complexity of shell sort?

A

Time Complexity

Best: O(n log n)
Worst: O(n²)
Average: O(n log n)

Space Complexity: O(1)
Stability: No

Shell sort is an unstable sorting algorithm because it does not examine the elements lying in between the intervals.

Time Complexity

Worst Case Complexity: less than or equal to O(n²)
The worst case complexity for shell sort is always less than or equal to O(n²). According to Poonen’s theorem, the worst case complexity for shell sort is Θ(N(log N)²/(log log N)²), or Θ(N(log N)²/log log N), or Θ(N(log N)²), or something in between.

Best Case Complexity: O(n log n)
When the array is already sorted, the total number of comparisons for each interval (or increment) is equal to the size of the array.

Average Case Complexity: O(n log n)
It is around O(n^1.25).

The complexity depends on the interval chosen. The above complexities differ for the different increment sequences chosen. The best increment sequence is unknown.

Space Complexity:

The space complexity for shell sort is O(1).

https://www.programiz.com/dsa/shell-sort

211
Q

What are shell sort’s applications?

A

Shell sort is used when:

the call stack is a significant overhead (the uClibc library uses this sort).
recursion would exceed a limit (the bzip2 compressor uses it).
insertion sort performs poorly because close elements are far apart; shell sort reduces the distance between such elements, so fewer swaps are needed.

https://www.programiz.com/dsa/shell-sort

212
Q

What is linear search?

A

Linear search is a sequential searching algorithm where we start from one end and check every element of the list until the desired element is found. It is the simplest searching algorithm.

https://www.programiz.com/dsa/linear-search

213
Q

What is the complexity of linear search?

A

Time Complexity: O(n)

Space Complexity: O(1)

https://www.programiz.com/dsa/linear-search

214
Q

What are the applications of linear search?

A

For searching operations in smaller arrays (<100 items).

https://www.programiz.com/dsa/linear-search

215
Q

What is binary search?

A

Binary Search is a searching algorithm for finding an element’s position in a sorted array.

In this approach, the element is always searched in the middle of a portion of an array.

Binary search can be implemented only on a sorted list of items. If the elements are not sorted already, we need to sort them first.
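Here is a minimal iterative sketch (names are illustrative): each comparison halves the search range, which is where the O(log n) bound comes from.

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list.
    Returns the index of target, or -1 if it is absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1        # target can only be in the right half
        else:
            high = mid - 1       # target can only be in the left half
    return -1
```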

https://www.programiz.com/dsa/binary-search

216
Q

What are the two methods for implementing binary search?

A

iteration method
recursive method

https://www.programiz.com/dsa/binary-search

217
Q

What is the complexity of binary search?

A

Time Complexities

Best case complexity: O(1)
Average case complexity: O(log n)
Worst case complexity: O(log n)

Space Complexity

The space complexity of the binary search is O(1).

https://www.programiz.com/dsa/binary-search

218
Q

What are the applications of binary search?

A

In libraries of Java, .Net, C++ STL
While debugging, the binary search is used to pinpoint the place where the error happens.

https://www.programiz.com/dsa/binary-search

219
Q

What is a greedy algorithm?

A

A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It doesn’t worry whether the current best result will bring the overall optimal result.

The algorithm never reverses the earlier decision even if the choice is wrong. It works in a top-down approach.

This algorithm may not produce the best result for all the problems. It’s because it always goes for the local best choice to produce the global best result.

However, we can determine if the algorithm can be used with any problem if the problem has the following properties:

  1. Greedy Choice Property

If an optimal solution to the problem can be found by choosing the best choice at each step without reconsidering the previous steps once chosen, the problem can be solved using a greedy approach. This property is called greedy choice property.

  2. Optimal Substructure

If the optimal overall solution to the problem corresponds to the optimal solution to its subproblems, then the problem can be solved using a greedy approach. This property is called optimal substructure.

https://www.programiz.com/dsa/greedy-algorithm

220
Q

What are the pros and cons of the greedy approach?

A

Advantages of Greedy Approach

The algorithm is easier to describe.
This algorithm can perform better than other algorithms (but, not in all cases).

Drawback of Greedy Approach

As mentioned earlier, the greedy algorithm doesn’t always produce the optimal solution. This is the major disadvantage of the algorithm

For example, suppose we want to find the longest path from the root to a leaf of a weighted tree. A greedy algorithm that always moves to the larger child at each step can end up on a path that is not the longest overall.

https://www.programiz.com/dsa/greedy-algorithm

221
Q

What is the Ford-Fulkerson algorithm?

A

Ford-Fulkerson algorithm is a greedy approach for calculating the maximum possible flow in a network or a graph.

https://www.programiz.com/dsa/ford-fulkerson-algorithm

222
Q

What is “flow network”?

A

A term, flow network, is used to describe a network of vertices and edges with a source (S) and a sink (T). Each vertex, except S and T, can receive and send an equal amount of stuff through it. S can only send and T can only receive stuff.

https://www.programiz.com/dsa/ford-fulkerson-algorithm

223
Q

What is a flow network augmenting path?

A

It is a path from the source to the sink in a flow network along which more flow can still be pushed, i.e. every edge on the path has remaining (residual) capacity.

https://www.programiz.com/dsa/ford-fulkerson-algorithm

224
Q

What is flow network residual graph?

A

It represents the flow network that has additional possible flow.

https://www.programiz.com/dsa/ford-fulkerson-algorithm

225
Q

What is flow network residual capacity?

A

It is the capacity of the edge after subtracting the flow from the maximum capacity.

https://www.programiz.com/dsa/ford-fulkerson-algorithm

226
Q

What are Ford-Fulkerson’s applications?

A

Water distribution pipeline
Bipartite matching problem
Circulation with demands

https://www.programiz.com/dsa/ford-fulkerson-algorithm

227
Q

What is Dijkstra’s algorithm?

A

Dijkstra’s algorithm allows us to find the shortest path between any two vertices of a graph.

It differs from the minimum spanning tree because the shortest distance between two vertices might not include all the vertices of the graph.
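A standard priority-queue sketch of the algorithm (the adjacency-list dict and names are assumptions; edge weights must be non-negative):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict vertex -> list of (neighbor, weight), weights >= 0.
    Returns a dict of shortest distances from source."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]              # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:             # stale heap entry, skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:     # found a shorter path to v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```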

https://www.programiz.com/dsa/dijkstra-algorithm

228
Q

What is the complexity of Dijkstra’s algorithm?

A

Time Complexity: O(E Log V)

where, E is the number of edges and V is the number of vertices.

Space Complexity: O(V)

https://www.programiz.com/dsa/dijkstra-algorithm

229
Q

What are the applications of Dijkstra’s algorithm?

A

To find the shortest path
In social networking applications
In a telephone network
To find locations on a map

https://www.programiz.com/dsa/dijkstra-algorithm

230
Q

What is Kruskal’s algorithm?

A

Kruskal’s algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges of that graph which

form a tree that includes every vertex
has the minimum sum of weights among all the trees that can be formed from the graph

https://www.programiz.com/dsa/kruskal-algorithm

231
Q

Compare Kruskal’s and Prim’s algorithms.

A

Prim’s algorithm is another popular minimum spanning tree algorithm that uses a different logic to find the MST of a graph. Instead of starting from an edge, Prim’s algorithm starts from a vertex and keeps adding lowest-weight edges which aren’t in the tree, until all vertices have been covered.

https://www.programiz.com/dsa/kruskal-algorithm

232
Q

What is Kruskal’s algorithm’s complexity?

A

The time complexity Of Kruskal’s Algorithm is: O(E log E).

https://www.programiz.com/dsa/kruskal-algorithm

233
Q

What are the applications of Kruskal’s algorithms?

A

To lay out electrical wiring
In computer networks (LAN connections)

https://www.programiz.com/dsa/kruskal-algorithm

234
Q

What is Prim’s algorithm?

A

Prim’s algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges of that graph which

form a tree that includes every vertex
has the minimum sum of weights among all the trees that can be formed from the graph

https://www.programiz.com/dsa/prim-algorithm

235
Q

Compare Prim’s vs Kruskal’s algorithm.

A

Kruskal’s algorithm is another popular minimum spanning tree algorithm that uses a different logic to find the MST of a graph. Instead of starting from a vertex, Kruskal’s algorithm sorts all the edges from low weight to high and keeps adding the lowest edges, ignoring those edges that create a cycle.

https://www.programiz.com/dsa/prim-algorithm

236
Q

What is the complexity of Prim’s algorithm?

A

The time complexity of Prim’s algorithm is O(E log V).

https://www.programiz.com/dsa/prim-algorithm

237
Q

What are the applications of Prim’s algorithm?

A

Laying cables of electrical wiring
In network design
To make protocols for network cycles

https://www.programiz.com/dsa/prim-algorithm

238
Q

What is Huffman coding?

A

Huffman Coding is a technique of compressing data to reduce its size without losing any of the details. It was first developed by David Huffman.

Huffman Coding is generally useful to compress the data in which there are frequently occurring characters.

https://www.programiz.com/dsa/huffman-coding

239
Q

What is the complexity of Huffman coding?

A

The time complexity for encoding each unique character based on its frequency is O(nlog n).

Extracting minimum frequency from the priority queue takes place 2*(n-1) times and its complexity is O(log n). Thus the overall complexity is O(nlog n).

https://www.programiz.com/dsa/huffman-coding

240
Q

What are the applications of Huffman coding?

A

Huffman coding is used in conventional compression formats like GZIP, BZIP2, PKZIP, etc.
For text and fax transmissions.

https://www.programiz.com/dsa/huffman-coding

241
Q

What is Dynamic Programming?

A

Dynamic Programming is a technique in computer programming that helps to efficiently solve a class of problems that have overlapping subproblems and optimal substructure property.

If any problem can be divided into subproblems, which in turn are divided into smaller subproblems, and if there are overlapping among these subproblems, then the solutions to these subproblems can be saved for future reference. In this way, efficiency of the CPU can be enhanced. This method of solving a solution is referred to as dynamic programming.

Such problems involve repeatedly calculating the value of the same subproblems to find the optimum solution.

https://www.programiz.com/dsa/dynamic-programming

242
Q

What is Memoization?

A

Dynamic programming works by storing the result of subproblems so that when their solutions are required, they are at hand and we do not need to recalculate them.

This technique of storing the value of subproblems is called memoization. By saving the values in the array, we save time for computations of sub-problems we have already come across.

Dynamic programming by memoization is a top-down approach to dynamic programming. By reversing the direction in which the algorithm works i.e. by starting from the base case and working towards the solution, we can also implement dynamic programming in a bottom-up manner.
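Both directions can be sketched on the Fibonacci numbers (a standard illustration; names are assumptions). Without memoization the recursive version takes exponential time; with it, each subproblem is solved once.

```python
def fib_memo(n, memo=None):
    """Top-down DP: cache each subproblem's result so it is computed
    only once (O(n) instead of O(2^n))."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    """Bottom-up DP: start from the base cases and iterate forward,
    keeping only the last two values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```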

https://www.programiz.com/dsa/dynamic-programming

243
Q

Compare recursion vs dynamic programming.

A

Dynamic programming is mostly applied to recursive algorithms. This is not a coincidence, most optimization problems require recursion and dynamic programming is used for optimization.

But not all problems that use recursion can use Dynamic Programming. Unless there are overlapping subproblems, as in the Fibonacci sequence problem, recursion can only reach the solution using a divide and conquer approach.

That is the reason why a recursive algorithm like Merge Sort cannot use Dynamic Programming, because the subproblems are not overlapping in any way.

https://www.programiz.com/dsa/dynamic-programming

244
Q

Compare greedy algorithms vs dynamic programming.

A

Greedy Algorithms are similar to dynamic programming in the sense that they are both tools for optimization.

However, greedy algorithms look for locally optimum solutions or in other words, a greedy choice, in the hopes of finding a global optimum. Hence greedy algorithms can make a guess that looks optimum at the time but becomes costly down the line and do not guarantee a globally optimum.

Dynamic programming, on the other hand, finds the optimal solution to subproblems and then makes an informed choice to combine the results of those subproblems to find the most optimum solution.

https://www.programiz.com/dsa/dynamic-programming

245
Q

What are some types of dynamic programming?

A

Longest Common Subsequence
Floyd-Warshall Algorithm

https://www.programiz.com/dsa/dynamic-programming

246
Q

What is the Floyd-Warshall algorithm?

A

Floyd-Warshall Algorithm is an algorithm for finding the shortest path between all the pairs of vertices in a weighted graph. This algorithm works for both the directed and undirected weighted graphs. But, it does not work for the graphs with negative cycles (where the sum of the edges in a cycle is negative).

A weighted graph is a graph in which each edge has a numerical value associated with it.

The Floyd-Warshall algorithm is also called Floyd’s algorithm, the Roy-Floyd algorithm, the Roy-Warshall algorithm, or the WFI algorithm.

This algorithm follows the dynamic programming approach to find the shortest paths.

https://www.programiz.com/dsa/floyd-warshall-algorithm
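The dynamic programming approach can be sketched as follows: the distance matrix is repeatedly relaxed, allowing one more vertex k as an intermediate point at each stage (the example graph below is illustrative, not from the card).

```python
INF = float("inf")

def floyd_warshall(dist):
    # dist is an n x n matrix; dist[i][j] is the weight of edge i -> j
    # (0 on the diagonal, INF where no edge exists).
    n = len(dist)
    dist = [row[:] for row in dist]      # work on a copy of the input
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # Is the path i -> k -> j shorter than the best known i -> j?
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

After the loop over k completes, dist[i][j] holds the length of the shortest path between every pair of vertices.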

247
Q

What is the complexity of the Floyd-Warshall algorithm?

A

Time Complexity

There are three nested loops, each running n times (where n is the number of vertices). So, the time complexity of the Floyd-Warshall algorithm is O(n³).

Space Complexity

The space complexity of the Floyd-Warshall algorithm is O(n²), for the n × n distance matrix.

https://www.programiz.com/dsa/floyd-warshall-algorithm

248
Q

What are the applications of the Floyd-Warshall algorithm?

A

- To find the shortest paths in a directed graph
- To find the transitive closure of directed graphs
- To find the inversion of real matrices
- For testing whether an undirected graph is bipartite

https://www.programiz.com/dsa/floyd-warshall-algorithm

249
Q

What is the longest common subsequence?

A

The longest common subsequence (LCS) is defined as the longest subsequence that is common to all the given sequences, provided that the elements of the subsequence are not required to occupy consecutive positions within the original sequences.

If S1 and S2 are the two given sequences, then Z is a common subsequence of S1 and S2 if Z is a subsequence of both S1 and S2. Furthermore, the elements of Z must occur at strictly increasing indices in both S1 and S2; that is, the indices of the elements chosen from the original sequences must be in ascending order in Z.

If

S1 = {B, C, D, A, A, C, D}

Then, {A, D, B} cannot be a subsequence of S1, as the order of the elements is not preserved (i.e., the indices are not strictly increasing).

https://www.programiz.com/dsa/longest-common-subsequence

250
Q

How is a dynamic programming algorithm more efficient than the recursive algorithm while solving an LCS problem?

A

The method of dynamic programming reduces the number of function calls. It stores the result of each function call so that it can be used in future calls without the need for redundant calls.

In the dynamic programming algorithm for LCS, the results obtained from each comparison between elements of X and elements of Y are stored in a table so that they can be used in future computations.

So, the time taken by the dynamic approach is the time taken to fill the table, i.e. O(mn). The recursive algorithm, in contrast, has a complexity of O(2^max(m, n)).

https://www.programiz.com/dsa/longest-common-subsequence
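The table-filling idea can be sketched like this (a minimal bottom-up version; the input strings in the test are illustrative):

```python
def lcs_length(X, Y):
    # table[i][j] = length of the LCS of the prefixes X[:i] and Y[:j].
    m, n = len(X), len(Y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                # Matching characters extend the LCS of the shorter prefixes.
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                # Otherwise keep the better of dropping one character.
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]
```

Each cell is computed once from its neighbours, which is why the total work is exactly the O(mn) cost of filling the table.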

251
Q

What are the applications of longest common subsequence?

A

- In compressing genome resequencing data
- To authenticate users within their mobile phones through in-air signatures

https://www.programiz.com/dsa/longest-common-subsequence

252
Q

What is a backtracking algorithm?

A

A backtracking algorithm is a problem-solving algorithm that uses a brute force approach for finding the desired output.

The Brute force approach tries out all the possible solutions and chooses the desired/best solutions.

The term backtracking suggests that if the current candidate solution is not suitable, we backtrack and try other candidates. Thus, recursion is used in this approach.

This approach is used to solve problems that have multiple solutions. If you want an optimal solution, dynamic programming is the better choice.

https://www.programiz.com/dsa/backtracking-algorithm
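As a sketch of the backtrack-and-retry idea (the N-Queens problem is a standard example, not one worked in this card): each recursive call places a queen in the next row, prunes any column or diagonal already under attack, and undoes the placement when a branch dead-ends.

```python
def solve_n_queens(n):
    # Returns all placements of n non-attacking queens, one queen per row;
    # each solution is a list of column indices, one per row.
    solutions = []

    def place(row, cols, diag1, diag2, board):
        if row == n:                       # all rows filled: record a solution
            solutions.append(board[:])
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                   # square is attacked: prune branch
            board.append(col)              # tentative choice
            place(row + 1, cols | {col},
                  diag1 | {row - col}, diag2 | {row + col}, board)
            board.pop()                    # backtrack: undo the choice

    place(0, set(), set(), set(), [])
    return solutions
```

The state space tree from the next card is exactly what this recursion explores: the root is the empty board, and each level down places one more queen.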

253
Q

What is a state space tree?

A

A state space tree is a tree representing all the possible states (solution or non-solution) of the problem, from the root as the initial state to the leaves as terminal states.

https://www.programiz.com/dsa/backtracking-algorithm

254
Q

What are the applications of backtracking algorithms?

A

- To find all Hamiltonian paths present in a graph
- To solve the N-Queens problem
- To solve maze problems
- To solve the Knight’s tour problem

https://www.programiz.com/dsa/backtracking-algorithm

255
Q

What is the Rabin-Karp algorithm?

A

The Rabin-Karp algorithm is an algorithm used for searching/matching patterns in text using a hash function. Unlike the naive string-matching algorithm, it does not compare every character in the initial phase; rather, it filters out the windows whose hashes do not match and only then performs a character-by-character comparison.

https://www.programiz.com/dsa/rabin-karp-algorithm
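A minimal sketch of the hashing idea (the base and modulus values are conventional illustrative choices): the hash of each text window is updated in O(1) by "rolling" out the leftmost character and rolling in the next one, and characters are compared only when the hashes agree.

```python
def rabin_karp(pattern, text, base=256, mod=101):
    # Returns the starting indices at which pattern occurs in text.
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    h = pow(base, m - 1, mod)          # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):                 # hash the pattern and the first window
        p_hash = (base * p_hash + ord(pattern[i])) % mod
        t_hash = (base * t_hash + ord(text[i])) % mod
    matches = []
    for s in range(n - m + 1):
        # Compare characters only on a hash match (it may be a spurious hit).
        if p_hash == t_hash and text[s:s + m] == pattern:
            matches.append(s)
        if s < n - m:                  # roll the hash to the next window
            t_hash = (base * (t_hash - ord(text[s]) * h)
                      + ord(text[s + m])) % mod
    return matches
```

The explicit string comparison inside the hash-match branch is what guards against the spurious hits described in the next card.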

256
Q

What are the limitations of the Rabin-Karp algorithm?

A

Spurious Hit

When the hash value of the pattern matches the hash value of a window of the text, but the window is not actually the pattern, it is called a spurious hit.

Spurious hits increase the running time of the algorithm. To minimize them, the hash values are computed modulo a suitably chosen (typically prime) number, which greatly reduces the number of spurious hits.

https://www.programiz.com/dsa/rabin-karp-algorithm

257
Q

What is the complexity of the Rabin-Karp algorithm?

A

The average-case and best-case complexity of the Rabin-Karp algorithm is O(m + n), and the worst-case complexity is O(mn).

The worst case occurs when a spurious hit happens for every window of the text.

https://www.programiz.com/dsa/rabin-karp-algorithm

258
Q

What are the applications of the Rabin-Karp algorithm?

A

- For pattern matching
- For searching for a string in a larger text

https://www.programiz.com/dsa/rabin-karp-algorithm