First Video Flashcards

Notes from the DataCamp course

1
Q

What is deep learning

A

Deep learning refers to algorithms that account for the hidden interactions between variables.

2
Q

How does deep learning work

A

Deep learning assigns weights to the variables and calculates the effect of each weighted variable on the output.

3
Q

What are the interactions in a neural network

A

There are inputs and outputs, and anything that is not one of these two belongs to the hidden layer. The hidden layer is not directly observable; it is an amalgamation of values calculated from the inputs. Each data point in a hidden layer is called a node.

4
Q

What is a node

A

Each data point in a hidden layer is called a node. It represents an aggregation of the input data, where the aggregation is defined by weights. E.g., if the inputs are a, b, c, then one node might be 1a + 4b - 3c and another might be 3a - 2b + 8c, where the 1, 4, -3 in the first node's calculation are the weights. Thus, the more nodes there are, the more successfully the interactions can be captured.
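
For instance, with the hypothetical inputs a = 2, b = 3, c = 1 (illustrative values, not from the course), the first node would be 1(2) + 4(3) - 3(1) = 11 and the second would be 3(2) - 2(3) + 8(1) = 8.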

5
Q

Code for forward propagation

A

import numpy as np

input_data = np.array([2, 3])
weights = {'node_1': np.array([1, 1]),
           'node_2': np.array([1, -1]),
           'output': np.array([2, -1])}
# Each node multiplies the inputs by its weights element-wise and sums
node_1_value = (input_data * weights['node_1']).sum()
node_2_value = (input_data * weights['node_2']).sum()
hidden_layer_values = np.array([node_1_value, node_2_value])
output = (hidden_layer_values * weights['output']).sum()
NOTE: {} means a dictionary is formed
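
Running this gives node_1_value = 2 + 3 = 5, node_2_value = 2 - 3 = -1, and output = 2(5) + (-1)(-1) = 11.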

6
Q

What are activation functions

A

For neural networks to achieve their maximum predictive power, we must apply an activation function in the hidden layers. Activation functions allow the model to capture non-linearities in the data; see the sketch below.
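
A minimal sketch of where an activation function fits into a node's calculation (tanh and ReLU are standard choices; the variable names here are my own):

import numpy as np

# Weighted sum flowing into a node
node_input = (np.array([2, 3]) * np.array([1, 1])).sum()  # = 5
# Without an activation function, the node is purely linear
linear_output = node_input
# A non-linear activation function transforms the same weighted sum
tanh_output = np.tanh(node_input)        # ~0.9999
relu_output = np.maximum(0, node_input)  # 5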

7
Q

What is ReLU

A

It is the industry-standard activation function; ReLU stands for Rectified Linear Unit.
ReLU(x) = { 0 if x < 0,
            x if x >= 0 }
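
A one-line NumPy implementation of this definition (the function name relu is my own):

import numpy as np

def relu(x):
    # 0 for negative inputs, x itself otherwise
    return np.maximum(0, x)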

8
Q

Code for forward propagation with the tanh activation function

A

import numpy as np

input_data = np.array([2, 3])
weights = {'node_1': np.array([1, 1]),
           'node_2': np.array([1, -1]),
           'output': np.array([2, -1])}
# Weighted sum into each hidden node, then apply tanh as the activation
node_1_input = (input_data * weights['node_1']).sum()
node_1_output = np.tanh(node_1_input)
node_2_input = (input_data * weights['node_2']).sum()
node_2_output = np.tanh(node_2_input)
hidden_layer_output = np.array([node_1_output, node_2_output])
output = (hidden_layer_output * weights['output']).sum()
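
Here node_1_input = 5 and node_2_input = -1, so node_1_output = tanh(5) ≈ 0.9999 and node_2_output = tanh(-1) ≈ -0.7616, giving output ≈ 2(0.9999) + (-1)(-0.7616) ≈ 2.76.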

9
Q

How does ReLU work for multiple hidden layers

A

[Diagram: inputs 3 and 5 feed two hidden nodes; the first node has weights (2, 4) and value 26, the second has weights (4, -5) and value 0.]
From the example we see that the first hidden node's value is ReLU(3*2 + 5*4) = ReLU(26) = 26, but the second hidden node's value is ReLU(3*4 + 5*(-5)) = ReLU(-13) = 0.
It is zero because the ReLU activation function returns 0 for any value less than or equal to zero, and the same calculation repeats at each node of the subsequent layers; see the sketch below.
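
A sketch reproducing this example in NumPy (the inputs and weights come from the card; the variable names are my own):

import numpy as np

def relu(x):
    return np.maximum(0, x)

input_data = np.array([3, 5])
node_0_value = relu((input_data * np.array([2, 4])).sum())   # ReLU(26) = 26
node_1_value = relu((input_data * np.array([4, -5])).sum())  # ReLU(-13) = 0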

10
Q

What is Representation Learning

A
  1. Deep networks internally build representations of the patterns in the data
  2. Partially replaces the need for feature engineering
  3. Subsequent layers build increasingly sophisticated representations of the raw data
11
Q

Why use Deep Learning

A
  1. The modeler does not need to specify the interactions between the inputs
  2. When the neural network is trained, it learns weights that help it find the relevant patterns and make better predictions
12
Q

How to loop over input data and compare two sets of weights

A

from sklearn.metrics import mean_squared_error

# input_data, target_actuals, weights_0, weights_1 and predict_with_network
# are provided by the exercise environment
# Create model_output_0
model_output_0 = []
# Create model_output_1
model_output_1 = []
# Loop over input_data
for row in input_data:
    # Append prediction to model_output_0
    model_output_0.append(predict_with_network(row, weights_0))
    # Append prediction to model_output_1
    model_output_1.append(predict_with_network(row, weights_1))
# Calculate the mean squared error for model_output_0: mse_0
mse_0 = mean_squared_error(target_actuals, model_output_0)
# Calculate the mean squared error for model_output_1: mse_1
mse_1 = mean_squared_error(target_actuals, model_output_1)
# Print mse_0 and mse_1
print("Mean squared error with weights_0: %f" %mse_0)
print("Mean squared error with weights_1: %f" %mse_1)
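
The exercise environment supplies predict_with_network; a minimal sketch consistent with the earlier forward-propagation cards (one hidden layer with ReLU; the structure of the weights dictionary is an assumption) might be:

import numpy as np

def predict_with_network(row, weights):
    # Assumes weights holds 'node_0', 'node_1' and 'output' arrays
    node_0 = np.maximum(0, (row * weights['node_0']).sum())
    node_1 = np.maximum(0, (row * weights['node_1']).sum())
    hidden_layer = np.array([node_0, node_1])
    return (hidden_layer * weights['output']).sum()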
13
Q

How to use gradient descent for a neural network

A

Gradient descent is used to optimize the weights: it repeatedly adjusts each weight in the direction that reduces the prediction error, taking small steps (scaled by the learning rate) along the negative gradient of the loss. A sketch follows.
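
A minimal sketch of one gradient-descent update for a single linear node with squared-error loss (the input values and learning rate are illustrative; the update rule itself is standard):

import numpy as np

input_data = np.array([2, 3])
weights = np.array([1.0, 2.0])
target = 5
learning_rate = 0.01

# Forward pass: prediction and error
prediction = (input_data * weights).sum()  # 2*1 + 3*2 = 8
error = prediction - target                # 8 - 5 = 3

# Gradient of the squared error with respect to each weight
gradient = 2 * input_data * error          # [12, 18]

# Step against the gradient to reduce the error
weights = weights - learning_rate * gradient  # [0.88, 1.82]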
