chatGPT questions exam1 Flashcards
What is Batch Normalization and its purpose in neural networks?
Batch Normalization (BN) is a technique that normalizes the outputs of a previous activation layer by subtracting the batch mean and dividing by the batch standard deviation, with learnable parameters for scale and shift to maintain the network’s representational capacity. It improves training speed, performance, and stability.
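A minimal sketch of the batch-normalization forward pass, assuming NumPy 2-D activations of shape (batch, features); the function name and epsilon are illustrative, not taken from any framework:

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # normalize each feature over the batch dimension
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # learnable scale (gamma) and shift (beta) preserve representational capacity
    return gamma * x_hat + beta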
What does a convolution operation involve in CNNs?
In CNNs, convolution involves sliding a filter over the input feature map and computing the dot product between the filter and the patch it covers at every position to extract local features.
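An illustrative NumPy sketch of this sliding-window dot product (technically a cross-correlation, which is what most CNN libraries compute); the function name is made up for this example:

import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # dot product between the kernel and the patch it currently covers
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out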
Why is padding used in convolutional neural networks?
Padding adds extra values (typically zeros) around the borders of the input so the filter can also be applied at the edges; with enough padding the output feature map keeps the same spatial size as the input.
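A small illustration of zero padding with NumPy, reusing the conv2d sketch above; the sizes are arbitrary:

import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                        # simple averaging filter
padded = np.pad(image, pad_width=1, mode='constant')  # one ring of zeros
same = conv2d(padded, kernel)                         # shape (5, 5), same as the input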
What is Connected Component Analysis used for in image processing?
Connected Component Analysis labels and groups pixels into components based on pixel connectivity, aiding tasks such as object detection and segmentation.
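A hedged example using SciPy's connected-component labeling (scipy.ndimage.label is a real function; the binary mask is invented for illustration):

import numpy as np
from scipy import ndimage

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]])
labels, num = ndimage.label(mask)   # 4-connectivity by default
# num == 2: each group of touching foreground pixels receives its own label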
What is Categorical Cross-Entropy and its role in classification problems?
Categorical Cross-Entropy is a loss function measuring the dissimilarity between the true label distribution and the predicted probability distribution, penalizing confident incorrect predictions in multi-class classification problems.
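A minimal NumPy sketch, assuming one-hot labels and predicted probabilities whose rows sum to 1; the clipping constant is only there to avoid log(0):

import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # batch average of -sum_c y_c * log(p_c)
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))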
What does the Max-Pooling operation do in CNNs?
Max-Pooling is a down-sampling operation that selects the maximum value within a window, reducing dimensionality and highlighting the most activated features.
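An illustrative NumPy sketch of 2x2 max-pooling, assuming the input height and width are divisible by the window size:

import numpy as np

def max_pool2d(x, size=2):
    h, w = x.shape
    # split the map into non-overlapping size x size windows
    windows = x.reshape(h // size, size, w // size, size)
    # keep only the maximum value inside each window
    return windows.max(axis=(1, 3))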
What does the ROC curve represent in binary classification systems?
The ROC curve plots the true positive rate against the false positive rate at various threshold settings, showing the trade-off between sensitivity and specificity.
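A hedged scikit-learn example (roc_curve and auc are real functions; the labels and scores are invented):

from sklearn.metrics import roc_curve, auc

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]                   # confidence for the positive class
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(auc(fpr, tpr))                              # area under the curve, 0.75 here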
How does the ReLU activation function mitigate the vanishing gradient problem?
ReLU (Rectified Linear Unit) mitigates the vanishing gradient problem because it does not saturate for positive inputs: its gradient is 1 whenever the input is positive, so gradients are not repeatedly shrunk as they propagate backward through deep layers.
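A minimal NumPy illustration of the function and its gradient:

import numpy as np

def relu(x):
    return np.maximum(0, x)

def relu_grad(x):
    # gradient is exactly 1 for positive inputs, 0 otherwise, so it never shrinks
    return (x > 0).astype(float)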
What is the purpose of hard-negative mining in machine learning?
Hard-negative mining focuses on selecting difficult-to-classify negative samples, improving model performance by ensuring it learns from the most challenging examples.
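A minimal sketch of one mining step, assuming per-sample losses and a boolean negative mask as NumPy arrays; the 3:1 negative-to-positive ratio is a common convention, not a fixed rule:

import numpy as np

def mine_hard_negatives(losses, is_negative, ratio=3):
    neg_idx = np.where(is_negative)[0]
    num_pos = int(np.sum(~is_negative))
    k = min(len(neg_idx), ratio * num_pos)
    # sort negatives by descending loss and keep the hardest k
    order = np.argsort(losses[neg_idx])[::-1]
    return neg_idx[order[:k]]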
How does dropout help in preventing overfitting in neural networks?
Dropout prevents overfitting by randomly dropping units and their connections during training, encouraging the network to learn robust features.
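An illustrative sketch of inverted dropout in NumPy (scaling at training time so no change is needed at test time):

import numpy as np

def dropout(x, p=0.5, training=True):
    if not training:
        return x
    # randomly zero units with probability p, rescale the survivors by 1/(1-p)
    mask = (np.random.rand(*x.shape) > p) / (1.0 - p)
    return x * mask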
List one strategy to combat overfitting in neural networks.
Implementing L2 regularization adds a penalty on large weights to the loss function, helping to prevent the model from overfitting.
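A small sketch of the penalty term, assuming the model's weights are available as a list of NumPy arrays; the lambda value is arbitrary:

import numpy as np

def l2_penalty(weights, lam=1e-4):
    # sum of squared weights, scaled by the regularization strength
    return lam * sum(np.sum(w ** 2) for w in weights)

# total_loss = data_loss + l2_penalty(model_weights)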
What information does the histogram of an image provide?
An image histogram displays the frequency of pixel intensities, offering insights into the contrast, brightness, and intensity distribution of the image.
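A minimal NumPy example for an 8-bit grayscale image (the image here is random, purely for illustration):

import numpy as np

image = np.random.randint(0, 256, size=(64, 64))
hist, bin_edges = np.histogram(image, bins=256, range=(0, 256))
# hist[k] is the number of pixels with intensity k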
How are trainable parameters calculated in a convolutional layer?
In a convolutional layer, the number of parameters is calculated as (filter width × filter height × input depth + 1) × number of filters, where “+1” accounts for the bias term.
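For example, 32 filters of size 3 × 3 on an RGB input (depth 3) give (3 × 3 × 3 + 1) × 32 = 896 trainable parameters.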
What characterizes a valid convolution operation?
Valid convolution ensures the filter fits entirely within the input image bounds, leading to a smaller output size.
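With stride 1 the output size per dimension is input size − filter size + 1; for example, a 3 × 3 filter applied to a 28 × 28 input yields a 26 × 26 feature map.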
What does sensitivity measure in a binary classification system?
Sensitivity, or true positive rate, measures the proportion of actual positives correctly identified by the classifier.
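In formula form, sensitivity = TP / (TP + FN); for example, 80 true positives and 20 false negatives give a sensitivity of 0.8.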
What is binary morphology used for in image processing?
Binary morphology uses structuring elements to manipulate shapes in binary images for purposes like noise removal, shape analysis, and segmentation.
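A hedged SciPy example (binary_opening is a real function; the mask is invented): opening removes small noise while preserving the larger shape.

import numpy as np
from scipy import ndimage

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                    # a 3x3 square of foreground
mask[0, 0] = True                        # an isolated noise pixel
opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
# erosion followed by dilation removes the lone pixel but keeps the square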