t-Distributed Stochastic Neighbor Embedding (t-SNE) Flashcards

1
Q

t-Distributed Stochastic Neighbor Embedding (t-SNE)

A

t-Distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization developed by Laurens van der Maaten and Geoffrey Hinton. It is a nonlinear dimensionality reduction technique that is particularly well-suited for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot. t-SNE is a powerful tool for exploring high-dimensional data, but its use comes with considerations around computational cost and the interpretability of the results.
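
As a minimal usage sketch (assuming scikit-learn is available; the digits dataset and the fixed seed are only illustrative choices), embedding 64-dimensional data into two dimensions looks roughly like this:

```python
# Minimal t-SNE sketch with scikit-learn: reduce 64-dimensional digit images
# to 2 dimensions suitable for a scatter plot.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)          # 1797 samples, 64 features each
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)
print(X_2d.shape)                            # (1797, 2)
```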

2
Q
Definition
A

t-SNE is a probabilistic technique used for high-dimensional data visualization. It aims to preserve the local structure of the data in a lower-dimensional space.

3
Q
Principle
A

t-SNE first converts pairwise distances between points in the high-dimensional space into probabilities of similarity, and then defines analogous similarities between the corresponding points in the low-dimensional space. The technique then minimizes the difference between these conditional probabilities (or similarities) so that the low-dimensional map represents the original neighborhood structure as faithfully as possible.
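
For concreteness, here is a rough NumPy sketch of the two kinds of similarities being compared: Gaussian-based joint probabilities in the original space and Student-t based probabilities in the embedding. The fixed bandwidth sigma is a simplification; real t-SNE tunes a per-point bandwidth to match a target perplexity.

```python
import numpy as np

def high_dim_affinities(X, sigma=1.0):
    """Gaussian similarities p_ij in the original space (fixed sigma for brevity;
    t-SNE actually chooses sigma_i per point from the perplexity)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)                  # a point is not its own neighbor
    P /= P.sum(axis=1, keepdims=True)         # conditional p_{j|i} per row
    return (P + P.T) / (2.0 * len(X))         # symmetrize into joint p_ij

def low_dim_affinities(Y):
    """Student-t (one degree of freedom) similarities q_ij in the embedding."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    Q = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()                        # normalize over all pairs

# P = high_dim_affinities(X)   # X: (n, d) data matrix
# Q = low_dim_affinities(Y)    # Y: (n, 2) candidate embedding
```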

4
Q
Computation
A

t-SNE uses gradient descent to minimize the Kullback-Leibler (KL) divergence between two distributions: a distribution that measures pairwise similarities of the input objects and a distribution that measures pairwise similarities of the corresponding low-dimensional points in the embedding.
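
As an illustration only, the sketch below spells out that objective and its exact O(n^2) gradient in NumPy, reusing the affinity functions from the previous card; the learning rate and loop are placeholders, and production implementations add momentum, early exaggeration, and Barnes-Hut or FFT-based approximations.

```python
import numpy as np

def kl_divergence(P, Q, eps=1e-12):
    """The cost t-SNE minimizes: KL(P || Q) summed over all pairs."""
    return np.sum(P * np.log((P + eps) / (Q + eps)))

def tsne_gradient(P, Y):
    """Exact gradient of KL(P || Q) with respect to the embedding Y:
    dC/dy_i = 4 * sum_j (p_ij - q_ij) * (y_i - y_j) / (1 + ||y_i - y_j||^2)."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)              # Student-t kernel
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()                       # low-dimensional similarities q_ij
    W = (P - Q) * inv
    return 4.0 * (np.diag(W.sum(axis=1)) - W) @ Y

# Plain gradient descent on the embedding Y (P comes from the data):
# for _ in range(1000):
#     Y -= learning_rate * tsne_gradient(P, Y)
```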

5
Q
Non-Linearity
A

Unlike linear dimensionality reduction techniques such as PCA or LDA, t-SNE employs a non-linear mapping of high-dimensional data into a lower-dimensional space, which allows it to capture much more complex patterns.
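
A small side-by-side sketch of the linear vs. non-linear contrast, again assuming scikit-learn and using the digits dataset purely for illustration; on data like this the non-linear embedding typically separates the classes more cleanly than the linear projection:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

X_pca = PCA(n_components=2).fit_transform(X)                    # linear projection
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)  # non-linear embedding
```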

6
Q
Visualization
A

t-SNE is best known for creating meaningful visualizations of high-dimensional data. It’s frequently used for exploratory data analysis and to visualize clusters in the data.
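
A minimal plotting sketch (assuming scikit-learn and matplotlib; the digits dataset and color map are arbitrary choices), coloring each embedded point by its known class so that cluster structure becomes visible:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap="tab10", s=5)
plt.colorbar(label="digit class")
plt.title("t-SNE embedding of the digits dataset")
plt.show()
```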

7
Q
Assumptions
A

t-SNE assumes that the local structure is more important than the global structure, i.e., the algorithm cares more about preserving the distances between nearby points than between distant points. This assumption can be a limitation depending on the use case.
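
One way to check this emphasis on local structure numerically is scikit-learn's trustworthiness score, which measures how well nearest-neighbor relations survive the embedding; the dataset and neighbor count below are illustrative only:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

# Score in [0, 1]; values near 1 mean few points become neighbors in the
# embedding that were not neighbors in the original space.
print(trustworthiness(X, X_2d, n_neighbors=10))
```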

8
Q
Parameters
A

t-SNE has a few key parameters, including the number of output dimensions (usually 2 for visualization purposes), the perplexity (which balances attention between local and global aspects of the data), and the learning rate for gradient descent.
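
A sketch of those knobs using scikit-learn's parameter names; the values shown are common starting points rather than recommendations, and recent scikit-learn versions default the learning rate to "auto":

```python
from sklearn.manifold import TSNE

tsne = TSNE(
    n_components=2,       # output dimensions (2 for a scatter plot)
    perplexity=30.0,      # balances local vs. global structure (roughly 5-50)
    learning_rate=200.0,  # step size for the gradient-based optimization
    random_state=42,      # fix randomness for reproducible runs
)
```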

9
Q
Benefits
A

t-SNE is particularly good at creating a single map that reveals structure at many different scales.

10
Q
Limitations
A

Despite its benefits, t-SNE has several limitations. It is known to be computationally intensive and can take a long time on large datasets. Also, it is not guaranteed to give the same output on different runs of the algorithm, due to the randomness in the initial configuration and the optimization process. Finally, t-SNE can sometimes produce “clusters” in the data when there are none.
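
A small sketch of the reproducibility point (toy random data, assuming scikit-learn): runs with a random initialization and no fixed seed can land in different local optima, while fixing random_state makes a run repeatable without making it any more "correct":

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.RandomState(0).rand(300, 20)   # toy data, purely illustrative

# Unseeded runs with random initialization may differ from run to run:
run_1 = TSNE(n_components=2, init="random").fit_transform(X)
run_2 = TSNE(n_components=2, init="random").fit_transform(X)

# Fixing the seed should make repeated runs identical (still only a local optimum):
fixed_1 = TSNE(n_components=2, init="random", random_state=0).fit_transform(X)
fixed_2 = TSNE(n_components=2, init="random", random_state=0).fit_transform(X)
print(np.allclose(fixed_1, fixed_2))
```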

11
Q
Applications
A

t-SNE has been used in a wide variety of applications including visualization of high-dimensional datasets in machine learning, data mining, image processing, information retrieval, bioinformatics, and computer vision.
