CNN Model Flashcards

(7 cards)

1
Q

What is an NPZ file?

A

An NPZ file is a NumPy Zip archive that stores multiple NumPy arrays in a compressed format. It is useful for saving and loading multiple arrays efficiently.

Saving multiple arrays to an NPZ file:
import numpy as np
arr1 = np.array([1, 2, 3])
arr2 = np.array([[4, 5, 6], [7, 8, 9]])
np.savez("data.npz", first=arr1, second=arr2)

Loading an NPZ file:
loaded = np.load("data.npz")
print(loaded["first"])   # Output: [1 2 3]
print(loaded["second"])  # Output: [[4 5 6]
                         #          [7 8 9]]

2
Q

How are images stored in an NPZ file?

A

Images are first converted into NumPy arrays.

Each image is stored as a 2D (grayscale) or 3D (color) array.

Multiple images are saved together in a compressed format.
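The points above can be sketched in a few lines. This is a minimal illustration (the image sizes and array names here are made up; real images would typically be loaded with a library such as Pillow before being converted to arrays):

```python
import numpy as np

# Hypothetical "images" as NumPy arrays:
gray = np.zeros((28, 28), dtype=np.uint8)      # 2D array: one grayscale image
color = np.zeros((28, 28, 3), dtype=np.uint8)  # 3D array: one color image (H, W, channels)

# Save both images together in one compressed archive
np.savez_compressed("images.npz", gray=gray, color=color)

# Load them back and check the shapes
data = np.load("images.npz")
print(data["gray"].shape)   # (28, 28)
print(data["color"].shape)  # (28, 28, 3)
```

A whole dataset can be stored the same way by stacking images into a single array, e.g. shape (num_images, 28, 28, 3).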

3
Q

Which function converts labels (categorical data) into a one-hot encoded format?

A

from tensorflow.keras.utils import to_categorical
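A quick usage sketch of the import above (the label values and num_classes here are made up for illustration):

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

labels = np.array([0, 2, 1])               # integer class labels
one_hot = to_categorical(labels, num_classes=3)
print(one_hot.shape)                        # (3, 3): one row per label
# Row i has a 1.0 in column labels[i] and 0.0 elsewhere
```

The resulting matrix is what a softmax output layer with categorical_crossentropy loss expects as targets.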

4
Q

What are the core CNN layers in TensorFlow?

A

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Conv2D – learns spatial filters over the image
# MaxPooling2D – downsamples feature maps
# Flatten – converts 2D feature maps into a 1D vector
# Dense – fully connected layer

5
Q

How do you add layers to a CNN in TensorFlow?

A

from tensorflow.keras.models import Sequential

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 1)),  # adjust input_shape to your image size/channels
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(89, activation='softmax')  # 89 classes
])

6
Q

How do you compile a CNN in TensorFlow?

A

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()  # see the architecture

7
Q

How do you train a CNN model?

A

model.fit(x_train, y_train, epochs=5, batch_size=32, validation_data=(x_test, y_test))

Parameters in fit()
✅ x_train, y_train
x_train → Input images (features) used for training
y_train → Corresponding labels (output) for training

✅ epochs=5
Number of times the model sees the entire dataset during training.
More epochs = better learning but risk of overfitting.

✅ batch_size=32
The model processes 32 samples at a time before updating weights.
Smaller batch size = better generalization but slower training.
Larger batch size = faster training but may need more memory.

✅ validation_data=(x_test, y_test)
Evaluates the model on unseen test data after each epoch.
Helps track how well the model generalizes to new data.
