Intro to PyTorch Flashcards
(15 cards)
What is a torch.Tensor?
The primary data structure in PyTorch, a multi-dimensional array supporting GPU acceleration and autograd.
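A minimal sketch of creating and inspecting one (the values are illustrative):

```python
import torch

# Build a 2x3 tensor from nested lists; dtype is inferred as float32.
t = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.shape)   # torch.Size([2, 3])
print(t.dtype)   # torch.float32
print(t.device)  # cpu (use t.to("cuda") to move it to a GPU if available)
```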
Name three factory methods to create tensors.
torch.zeros(shape), torch.ones(shape), torch.randn(shape) (also torch.arange, torch.linspace).
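A quick sketch of each (shapes and ranges are illustrative):

```python
import torch

zeros = torch.zeros(2, 3)       # 2x3 tensor filled with 0.0
ones = torch.ones(2, 3)         # 2x3 tensor filled with 1.0
noise = torch.randn(2, 3)       # samples from a standard normal distribution
steps = torch.arange(0, 10, 2)  # tensor([0, 2, 4, 6, 8])
grid = torch.linspace(0, 1, 5)  # 5 evenly spaced values from 0 to 1
```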
How do you slice a tensor to select specific elements or dimensions?
Use standard indexing and advanced indexing, e.g., tensor[:, 0], tensor[1:3, [0,2]].
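A small sketch using an illustrative 3x4 tensor:

```python
import torch

t = torch.arange(12).reshape(3, 4)  # rows 0..2, columns 0..3
col0 = t[:, 0]         # first column of every row -> tensor([0, 4, 8])
rows = t[1:3, [0, 2]]  # rows 1-2, columns 0 and 2 (advanced indexing)
mask = t[t > 5]        # boolean masking returns a 1-D tensor of matches
```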
What is the difference between in-place and out-of-place tensor operations?
In-place operations (suffixed with an underscore, e.g., tensor.add_()) modify the tensor's data directly; out-of-place operations return a new tensor, preserving the original.
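A minimal sketch of the contrast:

```python
import torch

a = torch.ones(3)
b = a.add(1)  # out-of-place: b is a new tensor, a is unchanged
a.add_(1)     # in-place: a itself becomes tensor([2., 2., 2.])
```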
How can you change the shape of a tensor?
Use reshape() or view() for general reshaping (view requires contiguous memory), flatten() to collapse dimensions, or stack() to combine tensors along a new dimension.
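A short sketch of each, using illustrative shapes:

```python
import torch

t = torch.arange(6)
m = t.reshape(2, 3)            # same data viewed as 2x3
v = m.view(3, 2)               # view works here because m is contiguous
flat = m.flatten()             # back to shape (6,)
stacked = torch.stack([t, t])  # new leading dimension -> shape (2, 6)
```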
What do squeeze() and unsqueeze() do in PyTorch?
squeeze() removes dimensions of size 1; unsqueeze(dim) adds a dimension of size 1 at the specified index.
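A small sketch with an illustrative (1, 3, 1) tensor:

```python
import torch

t = torch.zeros(1, 3, 1)
print(t.squeeze().shape)     # torch.Size([3]): all size-1 dims removed
print(t.squeeze(0).shape)    # torch.Size([3, 1]): only dim 0 removed
x = torch.zeros(3)
print(x.unsqueeze(0).shape)  # torch.Size([1, 3]): batch dim added
```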
How do you convert between torch.Tensor and numpy.ndarray?
Call tensor.numpy() to get a NumPy array (shares memory with the CPU tensor); use torch.from_numpy(ndarray) (also memory-sharing) or torch.tensor(ndarray) (makes a copy) to go back.
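A sketch showing the memory sharing (CPU tensor, illustrative values):

```python
import torch
import numpy as np

t = torch.ones(3)
arr = t.numpy()               # shares memory with t (CPU tensors only)
back = torch.from_numpy(arr)  # also shares memory with arr
copy = torch.tensor(arr)      # independent copy
t.add_(1)
print(arr)                    # [2. 2. 2.] - the shared view changed too
```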
What is a PyTorch Dataset and how do you access data samples?
A wrapper around data implementing __len__() and __getitem__(); use dataset[i] to get (input, label).
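A minimal sketch; ToyDataset and its random tensors are hypothetical placeholder data:

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """Wraps paired input/label tensors (hypothetical placeholder data)."""
    def __init__(self, inputs, labels):
        self.inputs = inputs
        self.labels = labels

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, i):
        return self.inputs[i], self.labels[i]

ds = ToyDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))
x, y = ds[0]  # a single (input, label) pair
```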
How do you use DataLoader to iterate over a Dataset?
Wrap with DataLoader(dataset, batch_size, shuffle) to automatically handle batching and shuffling.
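A minimal sketch using TensorDataset with placeholder random data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))
loader = DataLoader(ds, batch_size=16, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)  # e.g. torch.Size([16, 4]) torch.Size([16])
```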
How do you enable gradient tracking for tensors?
Set requires_grad=True when creating the tensor or call tensor.requires_grad_().
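A quick sketch of both approaches:

```python
import torch

a = torch.randn(3, requires_grad=True)  # tracked from creation
b = torch.randn(3)
b.requires_grad_()                       # enabled in place afterward
print(a.requires_grad, b.requires_grad)  # True True
```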
Explain the purpose of loss.backward() in PyTorch.
Computes gradients of the loss with respect to all tensors with requires_grad=True via reverse-mode autodiff.
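A minimal worked sketch: for loss = sum(x**2), the gradient is 2x:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()  # loss = x0^2 + x1^2
loss.backward()        # reverse-mode autodiff fills x.grad
print(x.grad)          # tensor([4., 6.]) = d(loss)/dx = 2x
```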
What must you do before calling optimizer.step()?
Compute gradients with loss.backward(), having first called optimizer.zero_grad() to clear old gradients and prevent accumulation across backward passes.
How do you perform a parameter update step?
After loss.backward(), call optimizer.step() to adjust model parameters based on computed gradients.
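A minimal end-to-end sketch; the linear model, SGD settings, and random data are all placeholder choices:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x, y = torch.randn(8, 4), torch.randn(8, 1)  # placeholder batch
optimizer.zero_grad()        # clear gradients from the previous step
loss = loss_fn(model(x), y)  # forward pass
loss.backward()              # compute gradients
optimizer.step()             # update parameters using those gradients
```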
How do you switch a model between training and evaluation modes?
Call model.train() for training (enables dropout, batchnorm updates) and model.eval() for inference (disables them).
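A small sketch using a toy model with a Dropout layer (an illustrative choice):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5), nn.Linear(4, 1))
model.train()          # dropout active, batchnorm stats update
print(model.training)  # True
model.eval()           # dropout off, batchnorm uses running stats
print(model.training)  # False
```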
What does torch.no_grad() do and why is it used?
Context manager that disables gradient computation for inference, reducing memory use and speeding up execution.
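A minimal sketch with a placeholder model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
x = torch.randn(8, 4)
with torch.no_grad():      # no autograd graph is built inside this block
    preds = model(x)
print(preds.requires_grad)  # False - nothing to backpropagate
```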