01 - Introduction Flashcards

1
Q

What is Deep Learning

A

Deep learning is loosely inspired by the human brain. It lets computers learn from experience and understand the world as a hierarchy of concepts, building complicated concepts from multiple simpler ones -> the depth of those concept graphs gives the field its name

2
Q

AI, ML, Representation learning, DL

A

Artificial Intelligence
- The concept of human-like machines
- Can be as simple as rule-based systems with hand-written rules for every action

Machine Learning
- Focus on data
- The features are hand-designed, but the model is not: it is learned from data by the machine through pattern recognition
- Approach that enables AI systems to operate in complicated real-world environments

Representation Learning
- Type of machine learning. DL is a subcategory of it.
- It both learns the representation (features) and how to produce the correct output
- Example: Shallow Autoencoders

Deep Learning
- Type of ML (& representation learning)
- achieves great power and flexibility by representing the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones

3
Q

What are visible and hidden layers?

A

Visible Layer:
- the input layer; contains the variables that we can observe

Hidden Layers:
- values not given by the data, abstract features
- The model must determine which concepts are useful for explaining the relationships in the observed data
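A minimal numpy sketch of the distinction (layer sizes and the random weights are illustrative, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Visible layer: the observed variables, e.g. 4 measured pixel intensities.
x = rng.random(4)

# Hidden layers: abstract features not given by the data; random weights
# here only to show the data flow.
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)

h = np.tanh(W1 @ x + b1)   # hidden representation: abstract features
y = W2 @ h + b2            # output computed from the abstract features

print(h.shape, y.shape)    # (3,) (2,)
```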

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

What are the two ways of measuring depth in neural networks?

A
  1. The first view is based on the number of sequential instructions that must be executed to evaluate the architecture. We can think of this as the length of the longest path through a flow chart that describes how to compute each of the model’s outputs given its inputs.
  2. Another approach, used by deep probabilistic models, regards the depth of a model as being not the depth of the computational graph but the depth of the graph describing how concepts are related to each other.
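The first view can be made concrete by taking the longest path through a small computational graph. The graph below, for sigmoid(w*x + b), is an illustrative example, not from the card:

```python
# Computational graph for sigmoid(w*x + b): each node lists its inputs.
# Depth (view 1) = length of the longest input-to-output path.
graph = {
    "x": [], "w": [], "b": [],   # inputs have no predecessors
    "mul": ["w", "x"],           # w * x
    "add": ["mul", "b"],         # w*x + b
    "sigmoid": ["add"],          # final output
}

def depth(node, g):
    """Length of the longest path from any input to this node."""
    preds = g[node]
    return 0 if not preds else 1 + max(depth(p, g) for p in preds)

print(depth("sigmoid", graph))  # 3: mul -> add -> sigmoid
```

Note that the answer depends on what counts as one step: counting whole layers instead of individual operations would give a different depth for the same model.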
5
Q

What are S&C cells?

A

About 70 years ago, Hubel & Wiesel made fundamental discoveries about the cat's visual system: the brain is modularized and has a hierarchical structure.

Simple cells (single neurons) → Complex cells (combining a number of simple cells) → Hypercomplex cells

6
Q

Explain Marrs theory on the visual cortex

A

Marr suggested that visual processing passes through a series of stages, each corresponding to a different representation, from retinal image to 3D model. The idea was that S and C cells are essential for computing primal sketches (blobs, lines, edges, etc.) and that there must be a neuron corresponding to each "smallest" part of the whole object, e.g. one simple neuron per finger, one neuron aggregating all finger neurons into a hand, and so on. In that way the whole 3D model is computed in a hierarchical way from volumetric primitives.

Of course it is not that simple, so the symbolic approach does not work well. One reason it fails is the need to know the coordinate transformations between different reference frames.

7
Q

What is KNN Classification?

A
  • Instance-based learning.
  • Does not construct a general internal model but simply stores instances of the training data.
  • Learning is based on the k nearest neighbors of each query point
  • k is chosen by the user

Training: store images with labels
Testing: for each datapoint (image), find the k nearest training examples and decide on a label by majority vote.
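The train/test procedure above can be sketched in plain numpy (the toy 2-D points and k are illustrative):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every stored instance
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = y_train[nearest]
    return np.bincount(votes).argmax()           # majority label

# "Training" is just storing the data with labels.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.2, 0.1])))  # 0
print(knn_predict(X_train, y_train, np.array([1.0, 0.9])))  # 1
```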

8
Q

KNN probabilities

A

x - query datapoint
N - total number of datapoints, of which N_k belong to class C_k
K - number of nearest neighbours of x considered, of which K_k belong to class C_k
V - volume of the small region around x containing the K nearest neighbours

The probability of class C_k given point x → p(C_k|x) (this is our goal)

Bayes: p(C_k|x) = (p(x|C_k)p(C_k))/p(x)

The unconditional probability of an object of class C_k → p(C_k) = N_k/N

The unconditional density of x (think of the K neighbours as probability mass spread over volume V) → p(x) = K/(NV)

The class-conditional density of x → p(x|C_k) = K_k/(N_k V)

Final solution using Bayes' theorem: p(C_k|x) = (K_k/(N_k V) · N_k/N)/(K/(NV)) = K_k/K
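A quick numerical check of the K_k/K result: among the K nearest neighbours of x, the fraction belonging to each class is the estimated posterior (toy data, illustrative):

```python
import numpy as np

def knn_posterior(X_train, y_train, x, K=4):
    """Estimate p(C_k | x) as K_k / K over the K nearest neighbours of x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:K]]      # labels of the K closest points
    classes = np.unique(y_train)
    return {int(c): float(np.mean(nearest == c)) for c in classes}  # K_k / K

X_train = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0], [1.0, 1.0], [1.1, 0.9]])
y_train = np.array([0, 0, 0, 1, 1])

print(knn_posterior(X_train, y_train, np.array([0.1, 0.0]), K=4))  # {0: 0.75, 1: 0.25}
```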

9
Q

wavs, MFCC, Chroma, Spectral Contrast

A

wav: Waveform Audio File Format; stores audio data in pulse-code modulation (PCM) form (64k long)

mfcc: Mel-Frequency Cepstral Coefficients; represent the short-term power spectrum of a sound signal (128 long)

chroma features: represent the 12 different pitch classes as features, capturing tonal content

Spectral contrast measures the difference in amplitude between peaks and valleys in the spectrum.
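A rough numpy-only sketch of the spectral-contrast idea. The equal-width bands and the quantile are illustrative simplifications; real extractors (e.g. librosa) use octave-scaled bands:

```python
import numpy as np

def spectral_contrast(signal, sr, n_bands=4, quantile=0.2):
    """Peak-vs-valley contrast (in dB) of the magnitude spectrum, per band."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    edges = np.linspace(0, sr / 2, n_bands + 1)   # equal-width bands (illustrative)
    contrast = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.sort(mag[(freqs >= lo) & (freqs < hi)])
        n = max(1, int(quantile * len(band)))
        peak = np.mean(band[-n:])                  # loudest bins in the band
        valley = np.mean(band[:n])                 # quietest bins in the band
        contrast.append(20 * np.log10((peak + 1e-12) / (valley + 1e-12)))
    return np.array(contrast)

# A pure 440 Hz tone: the band containing 440 Hz shows a huge peak/valley gap.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_contrast(tone, sr).round(1))
```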
