PyTorch is a Python-based scientific computing package targeted at two sets of audiences:
- A replacement for NumPy that uses the power of GPUs
- A deep learning research platform that provides maximum flexibility and speed
Tensors are similar to NumPy’s ndarrays, with the added benefit that they can be used on a GPU to accelerate computation.
import torch
Construct a 5x3 matrix filled with zeros:
x = torch.zeros(5, 3)
print(x)
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
Construct a randomly initialized matrix:
x = torch.rand(5, 3)
print(x)
tensor([[0.4898, 0.7586, 0.4739],
        [0.4444, 0.5913, 0.8829],
        [0.2168, 0.1519, 0.7091],
        [0.3477, 0.4247, 0.7335],
        [0.2396, 0.5724, 0.7993]])
Construct a tensor from data:
x = torch.tensor([5.5, 3])
print(x)
tensor([5.5000, 3.0000])
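torch.tensor infers the dtype from the data; nested lists work too, and the dtype can be set explicitly. A small example (the name m is just illustrative):
m = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)  # 2x2 matrix, explicit dtype
print(m)
tensor([[1., 2.],
        [3., 4.]])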
Or create a tensor based on an existing tensor. These methods reuse properties of the input tensor (e.g. its dtype) unless new values are provided:
# new_* methods create a tensor of the same type:
x = x.new_ones(5, 3)
print(x)
# create a tensor like x and override dtype:
x = torch.randn_like(x, dtype=torch.double)
print(x) # result has the same size
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
tensor([[ 0.7564,  0.9491, -0.5533],
        [-0.2931, -0.2400,  0.6996],
        [ 0.9043, -0.8304,  1.6302],
        [-1.0803, -0.8668,  0.6330],
        [-1.4586,  0.0956,  0.4361]], dtype=torch.float64)
Get its size:
print(x.size())
torch.Size([5, 3])
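torch.Size is in fact a tuple, so it supports all tuple operations, e.g. unpacking:
rows, cols = x.size()  # tuple unpacking
print(rows * cols)     # total number of elements
15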
There are multiple syntaxes for operations; consider addition as an example.
Addition: using the + operator
y = torch.rand(5, 3)
print(x + y)
tensor([[-0.3167,  1.1979,  1.2882],
        [ 0.7474, -1.4148,  0.4302],
        [ 1.8626,  1.9702,  0.0323],
        [ 1.4492,  0.9571,  0.5144],
        [-0.2637, -0.6164,  0.7562]])
You can also use the torch.add function:
print(torch.add(x, y))
tensor([[-0.3167,  1.1979,  1.2882],
        [ 0.7474, -1.4148,  0.4302],
        [ 1.8626,  1.9702,  0.0323],
        [ 1.4492,  0.9571,  0.5144],
        [-0.2637, -0.6164,  0.7562]])
You also have the option of providing an output tensor as an argument:
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
tensor([[-0.3167,  1.1979,  1.2882],
        [ 0.7474, -1.4148,  0.4302],
        [ 1.8626,  1.9702,  0.0323],
        [ 1.4492,  0.9571,  0.5144],
        [-0.2637, -0.6164,  0.7562]])
Addition: in-place
# adds x to y
y.add_(x)
print(y)
tensor([[-0.3167,  1.1979,  1.2882],
        [ 0.7474, -1.4148,  0.4302],
        [ 1.8626,  1.9702,  0.0323],
        [ 1.4492,  0.9571,  0.5144],
        [-0.2637, -0.6164,  0.7562]])
Any operation that mutates a tensor in-place is post-fixed with an underscore (_). For example, x.copy_(y) and x.t_() will change x.
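A minimal illustration on a fresh tensor, so the x above is left untouched:
t = torch.rand(2, 3)
t.t_()           # in-place transpose: t changes from 2x3 to 3x2
print(t.size())
torch.Size([3, 2])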
You can use standard NumPy-like indexing:
print(x[:, 1])
tensor([ 0.7548, -2.2326, 1.2403, 0.6962, -0.9840])
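Slicing works the same way, for instance selecting the first two rows:
print(x[:2, :])  # first two rows, all columns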
Resizing: if you want to resize/reshape a tensor, you can use torch.view:
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
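If you have a one-element tensor, use .item() to get the value as a Python number:
x = torch.randn(1)
print(x)
print(x.item())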
You can read all about PyTorch tensors, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., in the PyTorch documentation.
Converting a Torch Tensor to a NumPy array and vice versa is easy.
The Torch Tensor and NumPy array will share their underlying memory locations (if the Torch Tensor is on CPU), and changing one will change the other.
a = torch.ones(5)
print(a)
tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b)
[1. 1. 1. 1. 1.]
See how the NumPy array changes in value:
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
All tensors on the CPU except a CharTensor support converting to NumPy and back.
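Converting a NumPy array to a Torch Tensor works in the other direction via torch.from_numpy; the two share memory here as well:
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)  # modify the array in place; the tensor follows
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)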
Tensors can be moved onto any device using the .to method.
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")           # a CUDA device object
    y = torch.ones_like(x, device=device)   # directly create a tensor on GPU
    x = x.to(device)                        # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))        # ``.to`` can also change dtype together!
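A common device-agnostic pattern is to pick the device once up front and use it everywhere; a minimal sketch:
# fall back to the CPU when no GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(5, 3).to(device)
print(x.device)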
Autograd: tensors created with requires_grad=True track the operations performed on them, and calling .backward() computes gradients with respect to them.
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)
# Build a computational graph.
y = w * x + b
y.backward()
print(x.grad) # x.grad = 2
print(w.grad) # w.grad = 1
print(b.grad) # b.grad = 1
print(y.grad) # None: y is not a leaf tensor, so its gradient is not retained
tensor(2.)
tensor(1.)
tensor(1.)
None
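When gradients are not needed (e.g. during evaluation), wrap the computation in torch.no_grad() so autograd does not track it. A minimal sketch (the name y2 is ours):
with torch.no_grad():
    y2 = w * x + b       # same computation, but not tracked by autograd
print(y2.requires_grad)
False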