Laboratory Task 3

Instruction: Perform forward and backward propagation in Python using the inputs from Laboratory Task 2.

import numpy as np
# Input and target
x = np.array([1, 0, 1])
y = np.array([1])

# Initialize parameters
np.random.seed(42)
w = np.random.randn(3)   # 3 input weights
b = np.random.randn()    # bias

# Learning rate
lr = 0.001
# ----- Forward Propagation -----

z = np.dot(w, x) + b         # weighted sum
a = np.maximum(0, z)         # ReLU activation

# Squared-error loss for a single sample (the 0.5 factor cancels in the gradient)
loss = 0.5 * (a - y)**2

print("Forward Propagation:")
print(f"z (weighted sum): {z:.4f}")
print(f"a (ReLU output): {a:.4f}")
print(f"loss: {loss[0]:.6f}")

Forward Propagation:
z (weighted sum): 2.6674
a (ReLU output): 2.6674
loss: 1.390166
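As a sanity check, the weighted sum above can be recomputed by hand. Since the input x = [1, 0, 1] is a 0/1 mask, the dot product simply selects w[0] and w[2]; a minimal sketch (reusing the same seed-42 draws as the code above):

```python
import numpy as np

np.random.seed(42)
w = np.random.randn(3)   # same weight draws as the lab code
b = np.random.randn()    # same bias draw
x = np.array([1, 0, 1])

# x is a 0/1 mask, so the dot product just sums the selected weights
z_manual = w[0] + w[2] + b
z_dot = np.dot(w, x) + b
print(f"manual z: {z_manual:.4f}")  # matches the 2.6674 printed above
print(f"dot    z: {z_dot:.4f}")
```

Both lines print the same value, confirming the forward pass is just a masked sum of weights plus the bias.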
# ----- Backward Propagation -----

# Derivative of loss wrt output a
dL_da = a - y

# Derivative of ReLU wrt z
da_dz = 1.0 if z > 0 else 0.0

# Chain rule: dL/dz
dL_dz = dL_da * da_dz

# Gradients for weights and bias
dL_dw = dL_dz * x
dL_db = dL_dz

# ----- Parameter Update -----
w -= lr * dL_dw
b -= lr * dL_db

print("Backward Propagation:")
print(f"dL/da: {dL_da}")
print(f"da/dz (ReLU'): {da_dz}")
print(f"dL/dz: {dL_dz}")
print(f"dL/dw: {dL_dw}")
print(f"dL/db: {dL_db}")
print("\nUpdated weights:", w)
print("Updated bias:", b)

Backward Propagation:
dL/da: [1.66743255]
da/dz (ReLU'): 1.0
dL/dz: [1.66743255]
dL/dw: [1.66743255 0.         1.66743255]
dL/db: [1.66743255]

Updated weights: [ 0.49504672 -0.1382643   0.64602111]
Updated bias: [1.52136242]
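One way to verify the analytic gradients above is a finite-difference check: nudge each weight by a small ε, measure how the loss changes, and compare against dL/dw. A hedged sketch under the same setup (the helper names `loss_fn` and `num_grad` are illustrative, not part of the lab code):

```python
import numpy as np

x = np.array([1, 0, 1])
y = 1.0
np.random.seed(42)
w = np.random.randn(3)
b = np.random.randn()

def loss_fn(w, b):
    """Forward pass: ReLU unit followed by 0.5 * squared-error loss."""
    a = np.maximum(0.0, np.dot(w, x) + b)
    return 0.5 * (a - y) ** 2

# Analytic gradient via the same chain rule as above
z = np.dot(w, x) + b
a = np.maximum(0.0, z)
dL_dz = (a - y) * (1.0 if z > 0 else 0.0)
dL_dw = dL_dz * x

# Central-difference approximation for each weight
eps = 1e-6
num_grad = np.zeros(3)
for i in range(3):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[i] += eps
    w_minus[i] -= eps
    num_grad[i] = (loss_fn(w_plus, b) - loss_fn(w_minus, b)) / (2 * eps)

print("analytic :", dL_dw)
print("numerical:", num_grad)
print("max diff :", np.max(np.abs(dL_dw - num_grad)))
```

The two gradients should agree to several decimal places; a large mismatch would indicate a bug in the hand-derived chain rule.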