SOM Flashcards
(16 cards)
What is the primary purpose of a Self-Organizing Map (SOM)?
A Self-Organizing Map (SOM) is primarily used for:
- Clustering
- Dimensionality reduction
- Pattern recognition
- Visualization
SOM is an unsupervised neural network.
Describe the architecture of a Self-Organizing Map.
A Self-Organizing Map has a two-layer network consisting of:
- Input Layer: Contains input vector ( x = [x_1, x_2, …, x_n] )
- Output Layer: A grid of neurons/nodes representing classes or clusters
Each input connects to all neurons in the output layer.
What defines the neighborhood relation in a SOM?
The neighborhood relation is defined by ( N_j(t) ): the set of nodes within a distance ( D(t) ) from node j.
This relation is crucial for the cooperation among nodes.
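As a concrete illustration, the neighborhood set ( N_j(t) ) can be computed on a rectangular output grid. The Chebyshev (max-coordinate) grid distance used below is an assumption; the cards do not fix a particular metric.

```python
def neighborhood(j, grid_shape, D):
    """Return N_j: the set of grid nodes within distance D of node j.

    Assumes a rectangular output grid and Chebyshev grid distance
    (an assumption; the cards leave the metric unspecified).
    """
    rows, cols = grid_shape
    rj, cj = divmod(j, cols)  # flat node index -> (row, col) coordinates
    return {r * cols + c
            for r in range(rows)
            for c in range(cols)
            if max(abs(r - rj), abs(c - cj)) <= D}

# D = 0: only the node itself (no cooperation); D = 1 on a 3x3 map:
# the center node 4 plus its eight grid neighbors.
center_only = neighborhood(4, (3, 3), 0)
with_neighbors = neighborhood(4, (3, 3), 1)
```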
What occurs when ( D = 0 ) in a SOM?
When ( D = 0 ):
- There is no cooperation between nodes
- Only competition occurs
- Results in a map with no topological ordering (effectively random placement)
This means that nodes do not influence each other.
What happens when ( D > 0 ) in a SOM?
When ( D > 0 ):
- Nodes cooperate with neighbors
- Compete with distant nodes
- Produces a topology-preserving map
Similar input vectors map to nearby neurons.
Explain the Mexican Hat function in the context of SOM.
The Mexican Hat function models cooperation and competition among nodes:
- Central node has maximum excitation
- Nearby nodes are slightly excited
- Distant nodes are inhibited
This function helps in defining the neighborhood effect.
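One common closed form of the Mexican Hat profile is the Ricker wavelet, sketched below. The formula and the sigma value are assumptions for illustration; the cards describe only the qualitative shape (central excitation, nearby mild excitation, distant inhibition).

```python
import numpy as np

def mexican_hat(d, sigma=1.0):
    """Ricker ('Mexican hat') wavelet as a lateral-interaction profile.

    Assumed form (the cards give no explicit formula): maximal at
    d = 0, mildly positive for small d, negative (inhibitory) for
    larger d, decaying to zero far away.
    """
    r = (d / sigma) ** 2
    return (1.0 - r) * np.exp(-r / 2.0)

d = np.array([0.0, 0.5, 2.0])
h = mexican_hat(d)
# h[0] is the peak (central excitation); h[2] is negative (inhibition)
```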
What is the first step in the conventional SOM learning algorithm?
The first step is to initialize all weight vectors ( w_j ) randomly (e.g., small random values).
This sets the stage for learning from the input data.
How do you identify the Best Matching Unit (BMU) in SOM?
The BMU is identified by computing the distance from ( x(t) ) to all neurons using Euclidean distance:
- ( c = argmin_j || x(t) - w_j(t) || )
The neuron with the smallest distance is considered the BMU.
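The BMU search can be sketched in a few lines of NumPy; the layer sizes below are illustrative assumptions, not values from the cards.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_neurons = 4, 9               # illustrative sizes (assumption)
W = rng.random((n_neurons, n_features))    # step 1: random weight vectors w_j
x = rng.random(n_features)                 # current input x(t)

# c = argmin_j || x(t) - w_j(t) ||: Euclidean distance to every neuron
distances = np.linalg.norm(W - x, axis=1)
c = int(np.argmin(distances))              # index of the BMU
```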
What is the formula for updating the weights of the BMU and its neighbors?
The weight update formula is:
- ( w_j(t+1) = w_j(t) + alpha(t) · h_{cj}(t) · [x(t) - w_j(t)] )
( alpha(t) ) is the learning rate, and ( h_{cj}(t) ) is the neighborhood function.
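One update step can be vectorized over all neurons. The Gaussian form for ( h_{cj} ) is a common choice, assumed here because the cards name ( h_{cj}(t) ) without defining it.

```python
import numpy as np

def som_update(W, x, c, grid_coords, alpha, sigma):
    """w_j(t+1) = w_j(t) + alpha * h_cj * (x - w_j), for all j at once.

    h_cj is modeled as the common Gaussian neighborhood
    exp(-||r_c - r_j||^2 / (2 sigma^2)) over grid coordinates
    (an assumption; the cards do not fix its form).
    """
    grid_d2 = np.sum((grid_coords - grid_coords[c]) ** 2, axis=1)
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))   # h_cj, equal to 1 at the BMU
    return W + alpha * h[:, None] * (x - W)     # vectorized weight update
```

The BMU itself (h = 1) moves furthest toward x; its neighbors move proportionally less, which is what produces the topology-preserving behavior of the map.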
What are the challenges with the conventional SOM regarding the learning rate?
A fixed, non-adaptive learning-rate schedule ( alpha(t) ) presents a trade-off:
- If too small → slow convergence but low error
- If too large → fast convergence but high quantization error (QE)
Balancing convergence speed and error performance is crucial.
What is the key idea of the proposed Adaptive SOM algorithm?
The key idea is to use a variable learning rate ( alpha(t) ) that adapts based on the eigenvalues of the autocorrelation matrix ( R(t) ) of inputs.
This aims to overcome the limitations of conventional SOM.
How does the adaptive learning rate work in the proposed SOM?
If eigenvalues are high:
- Use a lower ( alpha(t) ) to maintain low QE
If eigenvalues are low:
- Use a higher ( alpha(t) ) to improve convergence rate
This dynamic adjustment helps in performance optimization.
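A rough sketch of this rule follows. The batch estimate of ( R(t) ), the threshold logic, and all parameter values are hypothetical; the cards state only the principle (high eigenvalues → lower rate, low eigenvalues → higher rate).

```python
import numpy as np

def adaptive_alpha(X, alpha_low=0.05, alpha_high=0.5, threshold=1.0):
    """Pick alpha(t) from the eigenvalues of the input autocorrelation.

    Hypothetical sketch: estimates R(t) from a batch X of input rows,
    then uses a lower rate when the largest eigenvalue is high (to keep
    QE low) and a higher rate when it is low (to speed convergence).
    """
    R = X.T @ X / len(X)                  # sample autocorrelation matrix R(t)
    lam_max = np.linalg.eigvalsh(R)[-1]   # eigvalsh returns ascending order
    return alpha_low if lam_max > threshold else alpha_high
```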
What is the formula for the adaptive learning rate in the proposed SOM?
The adaptive learning rate formula is:
- ( alpha(t) = lambda / (1 - beta^t) )
( lambda ) is the base learning factor, and ( beta ) is the decay rate (0 < ( beta ) < 1).
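The schedule is easy to compute directly; the lambda and beta values below are illustrative assumptions, since the cards give only the functional form.

```python
def alpha_schedule(t, lam=0.1, beta=0.9):
    """alpha(t) = lam / (1 - beta**t) for t >= 1, with 0 < beta < 1.

    lam (base learning factor) and beta (decay rate) values here are
    illustrative. alpha starts large and decays toward lam as beta**t
    vanishes.
    """
    return lam / (1.0 - beta ** t)
```

With these values, alpha(1) = 0.1 / (1 - 0.9) = 1.0, and alpha(t) approaches lam = 0.1 for large t.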
What improvements does the Robust Adaptive SOM show compared to traditional algorithms?
The Robust Adaptive SOM outperforms traditional algorithms in:
- Convergence Speed: Faster
- Quantization Error (QE): Lower
- Topology Error (TE): Lower
- Recognition Accuracy: Higher
- Iterations to Converge: Fewer
These metrics indicate significant performance enhancements.
What datasets were used to evaluate the performance of the Robust Adaptive SOM?
The performance was evaluated using 8 benchmark datasets from UCI and KEEL repositories.
These datasets provide a standard for comparison.
Summarize the main advantages of the Robust Adaptive SOM.
The Robust Adaptive SOM:
- Clusters data while preserving topological relationships
- Addresses fixed learning rate issues
- Improves learning rate adaptively using eigenvalues
- Results in enhanced performance metrics
This approach offers a significant advancement over conventional methods.