== Lab 2 ==
=== Simple regression ===
Write an iterative loop that minimizes the [[https://en.wikipedia.org/wiki/Rosenbrock_function | banana function]] (the Rosenbrock function) using gradient descent in PyTorch.
### Simple regressor: optimizing the bivariate Rosenbrock function ###
### Rosenbrock function: f(x,y) = (a-x)^2 + b(y-x^2)^2, with the global
### minimum at (x,y) = (a, a^2). For a=1, b=100 this is (1,1).
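### For reference, the analytic gradients (which autograd computes for you) are:
### df/dx = -2(a-x) - 4bx(y-x^2),   df/dy = 2b(y-x^2)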
# Instructions:
# Clear all previous variables if there are any
import sys
sys.modules[__name__].__dict__.clear()
# Import necessary modules (numpy as np and torch as T; the skeleton below uses the alias T)
# ..
# ..
# Define variables that we will be using and minimizing
# a =
# b =
# x =
# y =
# Define learning rate and iterations
# lr = ..
# iters = ..
# Define the function that we will be optimizing
# def rb_fun(x, y): ...
print("Initial values: x: {}, y: {}, f(x,y): {}".format(x, y, rb_fun(x, y)))
# Iterate
for i in range(iters):
    # Forward pass:
    # ...
    # Backward pass:
    # ..
    # The T.no_grad() context tells PyTorch not to record the operations done within it onto the computational graph.
    with T.no_grad():
        # Apply gradients; x.grad gives the gradient of the variable.
        # ..
        # Manually zero the gradients after updating the variables; x.grad.zero_() zeroes the grad of the variable.
        # ..
    if i > 0 and i % 10 == 0:
        print("Iteration: {}/{}, x,y: {},{}, loss: {}".format(i + 1, iters, x, y, loss))
=== Basic neural networks in PyTorch ===
Write a simple neural network class that fits a given toy dataset by classifying each input as class 0 or class 1.
You can start by defining a single-layer linear network (a perceptron),
and add layers once it works. When adding more layers you will need a non-linearity
between the layers, as in the sketch below.
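For example, with a tanh non-linearity between two linear layers, the forward pass could look like the following sketch (the weight names anticipate the skeleton below, and the final sigmoid squashes the output into [0, 1] so it can be read as a class probability):

h = T.tanh(x @ self.w0 + self.b0)        # first layer followed by the non-linearity
out = T.sigmoid(h @ self.w1 + self.b1)   # output layer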
# Instructions:
# Clear all previous variables if there are any
import sys
sys.modules[__name__].__dict__.clear()
# Import necessary modules (numpy as np and torch as T)
# ..
# ..
# Make a neural network class that implements the constructor and the forward pass.
class simple_NN:
    # The constructor of the class; it is called when you create an object with simple_NN(...)
    def __init__(self, in_dim, hid_dim, out_dim):
        # Define weights and biases
        # self.w0 = ...
        # self.b0 = ...

    # This function defines the forward pass (the computation of the neural network)
    def forward(self, x):
        # out = ...
        return out
if __name__ == "__main__":
    in_dim = 3
    out_dim = 1
    hid_dim = 12
    # Toy dataset:
    X = T.tensor([[1, 0, 0], [1, 0, -3], [2, -2, 3], [0, 0, 0], [3, 2, 3], [-2, -2, -2]], dtype=T.float32)
    Y = T.tensor([[0], [0], [0], [1], [1], [1]], dtype=T.float32)
    # Make the network object
    # nn = simple_NN(...)
    # Train on the dataset
    iters = ...
    lr = ...
    for i in range(iters):
        # Forward pass (use an iterative or a random subset of X!)
        # Calculate the loss (plain MSE can work, but preferably use a cross-entropy loss for classification: https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html)
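        # Note: BCELoss expects probabilities in [0, 1], so end the forward pass with a sigmoid.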
        # Backward pass
        # Apply gradients
        # Clear gradients
        if i % (iters // 10) == 0:
            print(f"Iter {i}/{iters}, loss: {loss}")
    # Evaluate the predictions visually at the end
    print(f"Y_ = {nn.forward(X)}, Y = {Y}")