#12 - Machine Learning Flashcards
(23 cards)
Dropout
A regularization technique where neurons are randomly turned off during training to reduce overfitting. Analogy: like in a classroom where random students are excused each class to ensure everyone learns equally.
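The idea can be sketched in plain Python. `dropout` here is a hypothetical helper, not a library API (frameworks like PyTorch provide `nn.Dropout`); it uses "inverted" dropout, scaling survivors by 1/(1-p) so the expected activation is unchanged and inference needs no rescaling:

```python
import random

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training,
    scale survivors by 1/(1-p) so the expected activation stays the same."""
    if not training or p == 0.0:
        return list(x)
    return [0.0 if random.random() < p else v / (1.0 - p) for v in x]
```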
ReLU (Rectified Linear Unit)
An activation function that sets all negative values to zero and keeps positive values unchanged: ReLU(x) = max(0, x). It helps prevent the vanishing gradient problem.
Leaky ReLU
A variation of ReLU where negative values are allowed but scaled by a small factor (e.g., 0.01): LeakyReLU(x) = x for x > 0, a*x for x <= 0. Helps prevent dead neurons.
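Both activations are one-liners; these are illustrative functions (real frameworks apply them elementwise over tensors):

```python
def relu(x):
    """Zero out negatives, pass positives through unchanged."""
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    """Like ReLU, but negatives are scaled by a small slope `a`,
    so the gradient never becomes exactly zero (no "dead" neurons)."""
    return x if x > 0 else a * x
```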
Batch Normalization
A technique to normalize layer activations to stabilize and speed up training. Analogy: like revising notes after each class to maintain consistent difficulty.
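A minimal sketch of the core computation for one feature across a batch; in real frameworks (`nn.BatchNorm2d` etc.) `gamma` and `beta` are learnable and running statistics are kept for inference:

```python
def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of values to zero mean and unit variance,
    then apply a learnable scale (gamma) and shift (beta)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in xs]
```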
Sigmoid
An activation function for binary classification problems, outputting values between 0 and 1: sigmoid(x) = 1 / (1 + e^(-x)).
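The formula translates directly (note: a naive implementation like this can overflow `math.exp` for very large negative inputs; libraries use numerically stable variants):

```python
import math

def sigmoid(x):
    """Squash any real number into (0, 1): sigmoid(0) = 0.5."""
    return 1.0 / (1.0 + math.exp(-x))
```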
Binary Cross-Entropy (BCE)
A loss function for binary classification that measures the difference between predicted probability and true label. Formula: L = -(y log(p) + (1 - y) log(1 - p)).
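The formula as a hypothetical helper; the clamp to `eps` is the standard trick libraries use to avoid log(0):

```python
import math

def bce(y, p, eps=1e-12):
    """Binary cross-entropy for a single example.
    y: true label (0 or 1), p: predicted probability of class 1."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

A confident correct prediction gives a loss near 0; a 50/50 guess costs log(2) regardless of the label.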
Adam Optimizer
A popular optimizer combining the benefits of AdaGrad and RMSProp, adjusting the learning rate for each parameter.
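The update rule can be shown with a toy scalar version. `adam_minimize` is a hypothetical helper using the standard Adam defaults for the betas (lr is enlarged for the demo); real optimizers apply this per parameter over tensors:

```python
def adam_minimize(grad_fn, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Scalar Adam: step size is adapted from running estimates of the
    gradient mean (m) and squared gradient (v), with bias correction."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g          # first moment
        v = beta2 * v + (1 - beta2) * g * g      # second moment
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x
```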
MaxPooling
A downsampling operation selecting the maximum value from each window (usually 2×2). It reduces resolution and highlights key features.
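A 2×2, stride-2 max pool can be written directly over a nested-list "image" (an illustrative sketch; frameworks operate on batched tensors):

```python
def max_pool_2x2(img):
    """Take the max of each non-overlapping 2x2 window,
    halving both spatial dimensions."""
    h, w = len(img), len(img[0])
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, w - 1, 2)]
            for i in range(0, h - 1, 2)]
```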
Bilinear Upsampling
A method to increase image resolution by bilinear interpolation, averaging neighboring pixels. Used in decoders, e.g. in U-Net.
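The core of bilinear interpolation is sampling one fractional coordinate from a grid, a weighted average of the four surrounding pixels (an illustrative helper; upsampling evaluates this at every output pixel):

```python
def bilinear(img, y, x):
    """Sample img at fractional coordinates (y, x) by blending the
    four surrounding pixels, weighted by distance."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy
```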
Conv2D
A convolution operation in neural networks for 2D images, extracting features like edges and patterns. Parameters include number of filters, kernel size, padding, and stride.
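A single-channel sketch of the sliding-window computation (as in deep learning convention, this is cross-correlation: the kernel is not flipped; "valid" padding, i.e. no border padding):

```python
def conv2d(img, kernel, stride=1):
    """Slide the kernel over the image and take the dot product
    at each position (single channel, no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(0, w - kw + 1, stride)]
            for i in range(0, h - kh + 1, stride)]
```

The number of filters corresponds to running many such kernels in parallel, one output channel each.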
What is the encoder in U-Net?
Part of the model responsible for feature extraction — reduces image size using e.g. max pooling.
What is the decoder in U-Net?
Part of the model that restores resolution via upsampling, often using transposed convolution.
What are skip connections?
Links between encoder and decoder layers that help preserve spatial details in the image.
Is max pooling the only way to downsample?
No. You can also use strided convolutions or attention-based pooling.
What can replace transposed convolution in the decoder?
You can use upsample + convolution (e.g. nn.Upsample + Conv2D) or pixel shuffle.
How can skip connections be improved?
By adding attention (e.g. SE block, CBAM) or using residual connections instead of concat.
What are popular U-Net variants?
UNet++, Attention UNet, and ResUNet; DeepLab, PSPNet, and HRNet are related segmentation architectures rather than strict U-Net variants.
Why perform ablation studies?
To determine which components of the model architecture improve performance.
What is UNet++?
An advanced U-Net with denser skip connections between encoder and decoder blocks.
What is Attention UNet?
A U-Net that incorporates attention modules to highlight important regions in the input.
What is the role of skip connections?
They combine detailed information from shallow layers (high resolution) with semantic information from deep layers (more context).
When do you need if __name__ == "__main__":?
On Windows (and sometimes in Jupyter), worker processes started by DataLoader (with num_workers > 0) require the script's entry point to be guarded by __main__, so that multiprocessing does not re-execute the script recursively.
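The same rule applies to any multiprocessing code, not just DataLoader. A minimal stdlib sketch of the pitfall (no PyTorch required; `square` is just an illustrative worker function):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # Without this guard, "spawn"-based platforms (Windows, and macOS by
    # default) re-import this script in every worker process, which would
    # try to create the Pool again recursively and raise an error.
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))
```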