Machine Learning - Unsupervised Flashcards Preview


Flashcards in Machine Learning - Unsupervised Deck (32)
1

Anomaly Detection

...

2

Cluster Analysis

Methods to assign a set of objects into groups, called clusters, such that objects in a cluster are more similar to each other than to those in other clusters. Well-known algorithms include hierarchical clustering, k-means, fuzzy clustering, and supervised clustering.

3

Clustering: Canopy

A preprocessing step for k-means or hierarchical clustering, intended to speed up clustering on large data sets. 1. Begin with the set of data points to be clustered. 2. Remove a point from the set, beginning a new "canopy". 3. For each point left in the set, assign it to the new canopy if its distance to the canopy center is less than the loose distance threshold. 4. If that distance is additionally less than the tight distance threshold, remove the point from the original set. 5. Repeat from step 2 until no data points remain in the set. These relatively cheaply built canopies can then be sub-clustered using a more expensive but more accurate algorithm.
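The steps above can be sketched as a short Python function; the thresholds `t_loose`/`t_tight` and the sample points are illustrative assumptions, not part of any particular library's API:

```python
import math

def canopy(points, t_loose, t_tight):
    """Group points into loosely overlapping canopies.

    Requires t_loose > t_tight. Points within t_tight of a canopy
    center are removed from further consideration."""
    remaining = list(points)
    canopies = []
    while remaining:
        center = remaining.pop(0)          # step 2: start a new canopy
        members = [center]
        still_remaining = []
        for p in remaining:
            d = math.dist(center, p)
            if d < t_loose:
                members.append(p)          # step 3: inside the loose radius
            if d >= t_tight:
                still_remaining.append(p)  # step 4: only tight points are removed
        remaining = still_remaining
        canopies.append(members)           # step 5: repeat until the set is empty
    return canopies

pts = [(0, 0), (0.5, 0), (5, 5), (5.2, 5.1), (10, 0)]
groups = canopy(pts, t_loose=2.0, t_tight=1.0)
```

Each canopy would then be handed to k-means or hierarchical clustering for the expensive sub-clustering pass.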

4

Clustering: Definition

Methods to assign a set of objects into groups. Objects in a cluster are more similar to each other than to those in other clusters. Enables understanding of the differences as well as the similarities within the data.

5

Cluster Analysis: Distance Measures Between Clusters

In hierarchical clustering: 1. Average linkage: the average distance between all pairs of points in the two clusters. 2. Single linkage: the distance between the nearest points in the two clusters. 3. Complete linkage: the distance between the farthest points in the two clusters.
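A minimal NumPy sketch of the three linkage measures, computed on two made-up 2-D clusters:

```python
import numpy as np

a = np.array([[0.0, 0.0], [1.0, 0.0]])   # cluster A
b = np.array([[4.0, 0.0], [6.0, 0.0]])   # cluster B

# pairwise distances between every point in A and every point in B
d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

single   = d.min()    # single linkage: nearest pair
complete = d.max()    # complete linkage: farthest pair
average  = d.mean()   # average linkage: mean over all pairs
```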

6

Cluster Analysis: Distance Metrics Between Items

1. Euclidean distance: the geometric (straight-line) distance between objects in multidimensional space, i.e., the shortest path between two objects. It is used to obtain sphere-shaped clusters. 2. City block (Manhattan) distance: the sum of distances along each dimension; it is less sensitive to outliers and is used to obtain diamond-shaped clusters. 3. Cosine similarity: the cosine of the angle between two objects. It is used mostly to compute the similarity between two sets of transaction data.
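The three metrics can be computed directly with NumPy; the vectors here are illustrative:

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([0.0, 0.0])

euclidean = np.linalg.norm(x - y)    # straight-line distance: 5.0
manhattan = np.abs(x - y).sum()      # sum along each dimension: 7.0

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
# cosine of the angle between u and v (45 degrees here)
cosine_sim = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
```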

7

Clustering: Flavors

Ward hierarchical clustering, k-means, Gaussian mixture models, spectral clustering, BIRCH, affinity propagation, fuzzy clustering

8

Cluster Analysis: Gaussian Mixture Models (GMM)

An unsupervised learning technique for clustering that generates a mixture of clusters from the full data set using a Gaussian (normal) distribution model for each cluster. The GMM's output is a set of cluster parameters (mean, covariance, and mixture weight) for each cluster, thereby producing characterization metadata that serves as a compact descriptive model of the full data collection.
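A minimal from-scratch sketch of the EM procedure behind a GMM, for a two-component 1-D mixture; the synthetic data, component count, and initial parameters are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# two well-separated 1-D Gaussian clusters
data = np.concatenate([rng.normal(0.0, 1.0, 200),
                       rng.normal(10.0, 1.0, 200)])

# initial guesses for means, variances, and mixture weights
mu = np.array([1.0, 9.0])
var = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = weights * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) \
           / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk
    weights = nk / len(data)
```

After convergence, `mu`, `var`, and `weights` are the compact cluster descriptors the card refers to.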

9

Cluster Analysis: Hierarchical Clustering

...

10

Clustering: K-Means

For a given K, finds K clusters by iteratively moving cluster centers to the cluster centers of gravity and adjusting the cluster set assignments.

11

Cluster Analysis: K-Means Overview

What: K-means is one of the most widely used clustering techniques because of its simplicity and speed. It partitions the data into a user-specified number of clusters, k. Why: Simplicity and speed; it is fast for large data sets, which are common in segmentation.

12

Cluster Analysis: K-Means: 4 Key Steps

1: Initialization of k centroids; 2: Data points assigned to the nearest centroid; 3: Relocation of each mean to the center of its points; 4: Repeat steps 2 and 3 until assignments no longer change

13

Cluster Analysis: K-Means: Cautions

K-means may converge to a local minimum, so the clusters obtained might not be optimal. To mitigate this, run the algorithm several times with different initial cluster centroids and compare the results.

14

Cluster Analysis: K-Means: How

1. Initialization: The algorithm is initialized by picking the initial k cluster representatives, or "centroids". These initial seeds can be sampled at random from the dataset, or taken from clustering a small subset of the data. 2. Data assignment: Each data point is assigned to its closest centroid, with ties broken arbitrarily. This results in a partitioning of the data. 3. Relocation of the "means": Each cluster representative is relocated to the center (mean) of all data points assigned to it. Repeat steps 2 and 3 until the convergence criterion is met (e.g., the assignment of objects to clusters no longer changes over multiple iterations) or the maximum number of iterations is reached.
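The procedure above can be sketched as a small NumPy function; the toy dataset, `k`, and iteration cap are illustrative assumptions, not a production implementation:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. initialization: sample k distinct data points as centroids
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # 2. assignment: each point goes to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # 3. relocation: move each centroid to the mean of its points
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # stop when the centroids (hence assignments) no longer change
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids, labels = kmeans(X, 2)
```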

15

Cluster Analysis: K-Means: Scaling Options

Note that each iteration needs N × k distance comparisons, which determines the time complexity of one iteration. The number of iterations required for convergence varies and may depend on N, but as a first cut the algorithm can be considered linear in the dataset size. K-means can also take advantage of data parallelism: when the data objects are distributed across processors, the assignment step can be parallelized easily by assigning each object to its nearest cluster in parallel.

16

Clustering: Preference Bias

Prefers data that is in groupings given some form of distance (Euclidean, Manhattan, or others)

17

Clustering: Restriction Bias

No restriction

18

Clustering: Type

Unsupervised learning; class type: clustering

19

Gaussian Mixture Models

...

20

Hidden Markov Models

...

21

Hidden Markov Models: Cons

...

22

Hidden Markov Models: Definition

Hidden Markov models are a kind of probabilistic model often used in language modeling. A sequence of hidden states is assumed to follow a Markov chain, where each state is independent of all past states given the previous one, and each observation depends only on the current hidden state.

23

Hidden Markov Models: Example Applications

Temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, and bioinformatics.

24

Hidden Markov Models: Flavors

Markov chains, Hidden Markov Models

25

Hidden Markov Models: Preference Bias

Generally works well for sequential data where the Markov assumption holds

26

Hidden Markov Models: Pros

Markov chains are useful models of many natural processes and the basis of powerful techniques in probabilistic inference and randomized algorithms.

27

Hidden Markov Models: Restriction Bias

Prefers time series data and memoryless information

28

Hidden Markov Models: Type

Supervised or unsupervised; class type: Markovian

29

Markov Models

Markov models are a kind of probabilistic model often used in language modeling. The observations are assumed to follow a Markov chain, where each observation is independent of all past observations given the previous one. In a Markov chain, a system transitions stochastically from one state to another. It is a memoryless process, in the sense that the distribution over the next state depends only on the current state, and not on the state at any past time. Markov chains are useful models of many natural processes and the basis of powerful techniques in probabilistic inference and randomized algorithms. A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.
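The drunkard's walk can be simulated in a few lines of Python; the step count and seed are arbitrary choices for illustration:

```python
import random

random.seed(42)

def drunkards_walk(steps, start=0):
    """Random walk on the integers: each step is +1 or -1 with equal probability."""
    pos = start
    path = [pos]
    for _ in range(steps):
        # the next position depends only on the current one (memoryless)
        pos += random.choice([-1, 1])
        path.append(pos)
    return path

path = drunkards_walk(100)
```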

30

PCA

...