SOM Flashcards

(16 cards)

1
Q

What is the primary purpose of a Self-Organizing Map (SOM)?

A

A Self-Organizing Map (SOM) is primarily used for:

  • Clustering
  • Dimensionality reduction
  • Pattern recognition
  • Visualization

A SOM is an unsupervised neural network.

2
Q

Describe the architecture of a Self-Organizing Map.

A

A Self-Organizing Map has a two-layer network consisting of:

  • Input Layer: contains the input vector \( x = [x_1, x_2, \ldots, x_n] \)
  • Output Layer: A grid of neurons/nodes representing classes or clusters

Each input connects to all neurons in the output layer.
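
As a concrete sketch (the grid size and input dimension here are illustrative, not from the source), the whole two-layer network can be stored as a single weight array:

```python
import numpy as np

n = 4                  # input dimension (illustrative)
rows, cols = 10, 10    # output grid of neurons (illustrative)

# One n-dimensional weight vector per output neuron; because every
# input component connects to every neuron, the full input-to-output
# connectivity is captured by this single (rows, cols, n) array.
weights = np.random.rand(rows, cols, n)

x = np.random.rand(n)  # an input vector x = [x_1, ..., x_n]
```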

3
Q

What defines the neighborhood relation in a SOM?

A

The neighborhood relation is defined by \( N_j(t) \): the set of nodes within a distance \( D(t) \) from node \( j \).

This relation is crucial for the cooperation among nodes.

4
Q

What occurs when \( D = 0 \) in a SOM?

A

When \( D = 0 \):

  • There is no cooperation between nodes
  • Only competition occurs
  • The result is a random map

This means that nodes do not influence each other.

5
Q

What happens when \( D > 0 \) in a SOM?

A

When \( D > 0 \):

  • Nodes cooperate with their neighbors
  • Nodes compete with distant nodes
  • The result is a topology-preserving map

Similar input vectors map to nearby neurons.

6
Q

Explain the Mexican Hat function in the context of SOM.

A

The Mexican Hat function models cooperation and competition among nodes:

  • Central node has maximum excitation
  • Nearby nodes are slightly excited
  • Distant nodes are inhibited

This function helps in defining the neighborhood effect.
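
A common concrete choice for this profile is the Ricker ("Mexican hat") wavelet; the exact function in a given SOM variant may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def mexican_hat(d, sigma=1.0):
    """Ricker profile over grid distance d: maximal at d = 0,
    mildly positive for nearby nodes, negative (inhibitory) for
    distant ones, fading toward zero far away."""
    r2 = (d / sigma) ** 2
    return (1.0 - r2) * np.exp(-r2 / 2.0)
```

Here `mexican_hat(0.0)` gives the peak excitation of the central node, while `mexican_hat(2.0)` is negative, i.e. inhibition of distant nodes.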

7
Q

What is the first step in the conventional SOM learning algorithm?

A

The first step is to initialize all weight vectors \( w_j \) randomly (e.g., small random values).

This sets the stage for learning from the input data.

8
Q

How do you identify the Best Matching Unit (BMU) in SOM?

A

The BMU is identified by computing the Euclidean distance from \( x(t) \) to every neuron's weight vector:

  • \( c = \arg\min_j \| x(t) - w_j(t) \| \)

The neuron with the smallest distance is considered the BMU.
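
The BMU search can be sketched as a vectorized argmin over Euclidean distances (the flat `(J, n)` layout of weight vectors is an illustrative choice):

```python
import numpy as np

def best_matching_unit(x, weights):
    """Return c = argmin_j ||x - w_j|| for a (J, n) array of
    weight vectors, using the Euclidean norm."""
    dists = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(dists))

w = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [0.9, 1.1]])
best_matching_unit(np.array([1.0, 1.0]), w)  # → 1 (exact match, distance 0)
```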

9
Q

What is the formula for updating the weights of the BMU and its neighbors?

A

The weight update formula is:

  • \( w_j(t+1) = w_j(t) + \alpha(t) \cdot h_{cj}(t) \cdot [x(t) - w_j(t)] \)

\( \alpha(t) \) is the learning rate, and \( h_{cj}(t) \) is the neighborhood function.
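
A minimal sketch of this update, assuming a 1-D chain of neurons and a Gaussian neighborhood function (a common choice for \( h_{cj}(t) \), though the source may use a different one):

```python
import numpy as np

def update_weights(weights, x, c, alpha, sigma):
    """w_j(t+1) = w_j(t) + alpha * h_cj * (x - w_j), where
    h_cj = exp(-dist(c, j)^2 / (2 sigma^2)) and dist is the index
    distance along an assumed 1-D neuron chain."""
    j = np.arange(len(weights))
    h = np.exp(-((j - c) ** 2) / (2.0 * sigma ** 2))  # neighborhood h_cj
    return weights + alpha * h[:, None] * (x - weights)
```

The BMU (j = c, where h = 1) takes the full \( \alpha \) step toward \( x \); neighbors move less the farther they sit from c.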

10
Q

What are the challenges with the conventional SOM regarding the learning rate?

A

The fixed learning rate \( \alpha(t) \) presents challenges:

  • If too small → slow convergence but low error
  • If too large → fast convergence but high quantization error (QE)

Balancing convergence speed and error performance is crucial.

11
Q

What is the key idea of the proposed Adaptive SOM algorithm?

A

The key idea is to use a variable learning rate \( \alpha(t) \) that adapts based on the eigenvalues of the autocorrelation matrix \( R(t) \) of the inputs.

This aims to overcome the limitations of conventional SOM.

12
Q

How does the adaptive learning rate work in the proposed SOM?

A

If the eigenvalues are high:

  • Use a lower \( \alpha(t) \) to maintain low QE

If the eigenvalues are low:

  • Use a higher \( \alpha(t) \) to improve the convergence rate

This dynamic adjustment helps in performance optimization.
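
One way this inverse relationship could be realized in code — a hedged sketch, not the paper's exact rule; the `1 / (1 + lambda_max)` scaling and the base value are assumptions made for illustration:

```python
import numpy as np

def eigen_scaled_alpha(inputs, base=0.5):
    """Estimate the autocorrelation matrix R = mean(x x^T) over the
    inputs seen so far, then shrink the learning rate when its largest
    eigenvalue is large (high input energy -> smaller steps, lower QE).
    The base value and the 1/(1 + lambda_max) form are illustrative."""
    X = np.asarray(inputs, dtype=float)
    R = (X.T @ X) / len(X)               # autocorrelation estimate R(t)
    lam_max = np.linalg.eigvalsh(R)[-1]  # largest eigenvalue
    return base / (1.0 + lam_max)
```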

13
Q

What is the formula for the adaptive learning rate in the proposed SOM?

A

The adaptive learning rate formula is:

  • \( \alpha(t) = \frac{\lambda}{1 - \beta^t} \)

\( \lambda \) is the base learning factor, and \( \beta \) is the decay rate, with \( 0 < \beta < 1 \).
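
The schedule in code (the \( \lambda \) and \( \beta \) values are illustrative; note \( t \ge 1 \), since \( t = 0 \) would divide by zero):

```python
def alpha_schedule(t, lam=0.01, beta=0.9):
    """alpha(t) = lambda / (1 - beta^t), 0 < beta < 1: large early
    steps (1 - beta^t is small for small t) that decay toward the
    base learning factor lambda as t grows."""
    return lam / (1.0 - beta ** t)

alpha_schedule(1)  # 0.01 / (1 - 0.9) = 0.1
```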

14
Q

What improvements does the Robust Adaptive SOM show compared to traditional algorithms?

A

The Robust Adaptive SOM outperforms traditional algorithms in:

  • Convergence Speed: Faster
  • Quantization Error (QE): Lower
  • Topology Error (TE): Lower
  • Recognition Accuracy: Higher
  • Iterations to Converge: Fewer

These metrics indicate significant performance enhancements.

15
Q

What datasets were used to evaluate the performance of the Robust Adaptive SOM?

A

The performance was evaluated on 8 benchmark datasets from the UCI and KEEL repositories.

These datasets provide a standard for comparison.

16
Q

Summarize the main advantages of the Robust Adaptive SOM.

A

The Robust Adaptive SOM:

  • Clusters data while preserving topological relationships
  • Addresses fixed learning rate issues
  • Improves learning rate adaptively using eigenvalues
  • Results in enhanced performance metrics

This approach offers a significant advancement over conventional methods.