Robot Control Algorithms: PID and Model Predictive Control
Control algorithms are the brain of robot movement, transforming desired trajectories into precise motor commands. Today, we’ll explore two fundamental control approaches: Proportional-Integral-Derivative (PID) controllers and Model Predictive Control (MPC). These algorithms form the backbone of modern robot control systems.
Introduction to Robot Control
Robot control involves converting desired motion (position, velocity, or trajectory) into actual motor commands. The key challenges include:
- Precision: Achieving accurate tracking of desired trajectories
- Stability: Ensuring the robot doesn’t oscillate or become unstable
- Robustness: Handling disturbances, uncertainties, and model errors
- Real-time computation: Generating control signals within tight time constraints
PID Controllers
PID controllers are the most widely used control algorithm in robotics due to their simplicity, effectiveness, and robustness.
PID Fundamentals
A PID controller calculates an error value as the difference between a measured process variable and a desired setpoint. The controller attempts to minimize the error by adjusting the process control inputs.
The PID algorithm consists of three terms:
- Proportional (P): Reacts to the current error
- Integral (I): Reacts to the accumulation of past errors
- Derivative (D): Reacts to the rate of change of error
The control output is: u(t) = Kp·e(t) + Ki·∫₀ᵗ e(τ)dτ + Kd·de(t)/dt
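In discrete time the integral becomes a running sum and the derivative a finite difference. A minimal worked sketch of that discrete form (the helper name and gains are illustrative, not part of the class below):

```python
# Discrete-time PID: u_k = Kp*e_k + Ki*sum(e_j*dt) + Kd*(e_k - e_{k-1})/dt
def pid_step(error, prev_error, integral, dt, Kp, Ki, Kd):
    """One discrete PID update; returns (control output, updated integral)."""
    integral += error * dt                  # accumulate past error
    derivative = (error - prev_error) / dt  # finite-difference slope
    return Kp * error + Ki * integral + Kd * derivative, integral

# Example: a constant error of 1.0 over two steps of dt = 0.1
u1, I = pid_step(1.0, 0.0, 0.0, 0.1, Kp=2.0, Ki=0.5, Kd=0.1)  # derivative kicks in
u2, I = pid_step(1.0, 1.0, I, 0.1, Kp=2.0, Ki=0.5, Kd=0.1)    # integral keeps growing
```

On the first step the derivative term dominates (the error just jumped); on the second it vanishes while the integral term grows, which is exactly the behavior the full controller class below exhibits.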
PID Implementation
```python
import numpy as np
import matplotlib.pyplot as plt

class PIDController:
    def __init__(self, Kp, Ki, Kd, setpoint=0, output_limits=None):
        """
        Initialize PID controller

        Args:
            Kp: Proportional gain
            Ki: Integral gain
            Kd: Derivative gain
            setpoint: Desired value
            output_limits: (min, max) output limits
        """
        self.Kp = Kp
        self.Ki = Ki
        self.Kd = Kd
        self.setpoint = setpoint
        self.output_limits = output_limits

        # Internal state
        self.integral = 0.0
        self.previous_error = 0.0
        self.previous_time = None

    def update(self, current_value, current_time):
        """
        Update PID controller and return control output

        Args:
            current_value: Current measured value
            current_time: Current time

        Returns:
            Control output
        """
        # Calculate error
        error = self.setpoint - current_value

        # Time difference
        if self.previous_time is None:
            dt = 0.01  # Default time step on the first call
        else:
            dt = current_time - self.previous_time

        # Integral term
        self.integral += error * dt

        # Derivative term
        if dt > 0:
            derivative = (error - self.previous_error) / dt
        else:
            derivative = 0.0

        # Calculate control output
        output = (self.Kp * error +
                  self.Ki * self.integral +
                  self.Kd * derivative)

        # Apply output limits
        if self.output_limits:
            output = np.clip(output, self.output_limits[0], self.output_limits[1])

        # Update state
        self.previous_error = error
        self.previous_time = current_time

        return output

    def reset(self):
        """Reset PID controller state"""
        self.integral = 0.0
        self.previous_error = 0.0
        self.previous_time = None
```
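One practical refinement worth knowing: when the output saturates at its limits, the integral keeps accumulating ("windup") and causes large overshoot once the error changes sign. A minimal standalone sketch of integral clamping as an anti-windup measure (the function name and `integral_limit` bound are illustrative assumptions, not part of the class above):

```python
import numpy as np

def pid_antiwindup_step(error, state, dt, Kp, Ki, Kd,
                        integral_limit=10.0, output_limits=(-5.0, 5.0)):
    """One PID step with integral clamping; state is (integral, prev_error)."""
    integral, prev_error = state
    # Clamp the accumulated integral so it cannot grow without bound
    integral = np.clip(integral + error * dt, -integral_limit, integral_limit)
    derivative = (error - prev_error) / dt if dt > 0 else 0.0
    u = Kp * error + Ki * integral + Kd * derivative
    u = float(np.clip(u, *output_limits))
    return u, (float(integral), error)

# With a large sustained error, the integral stops growing at the clamp
state = (0.0, 0.0)
for _ in range(1000):
    u, state = pid_antiwindup_step(5.0, state, 0.01, Kp=2.0, Ki=0.5, Kd=0.1)
```

Without the clamp, the integral here would reach 50 and the controller would overshoot badly when the error finally closed; with it, the stored integral saturates at the limit.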
```python
# Example PID control for a simple system
def pid_control_example():
    """Demonstrate PID control on a simple first-order system"""
    # System parameters
    system_gain = 1.0
    system_time_constant = 1.0

    # PID parameters
    Kp = 2.0
    Ki = 0.5
    Kd = 0.1

    pid = PIDController(Kp, Ki, Kd, setpoint=1.0, output_limits=(-5, 5))

    # Simulation parameters
    dt = 0.01
    total_time = 10.0
    time_steps = int(total_time / dt)

    # Arrays for storing results
    times = np.linspace(0, total_time, time_steps)
    setpoints = np.ones_like(times)
    outputs = np.zeros_like(times)
    measurements = np.zeros_like(times)

    # Simple first-order system: dy/dt = (K*u - y)/tau
    def system_dynamics(y, u):
        return (system_gain * u - y) / system_time_constant

    # Initial conditions
    current_value = 0.0

    # Run simulation
    for i, t in enumerate(times):
        # Get PID output
        control_output = pid.update(current_value, t)
        outputs[i] = control_output

        # Record measurement, then integrate the system one Euler step
        measurements[i] = current_value
        current_value += system_dynamics(current_value, control_output) * dt

    # Plot results
    fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(12, 10))

    # Plot setpoint and measurement
    ax1.plot(times, setpoints, 'r--', linewidth=2, label='Setpoint')
    ax1.plot(times, measurements, 'b-', linewidth=2, label='Measurement')
    ax1.set_ylabel('Value')
    ax1.set_title('PID Control Response')
    ax1.legend()
    ax1.grid(True, alpha=0.3)

    # Plot control output
    ax2.plot(times, outputs, 'g-', linewidth=2)
    ax2.set_ylabel('Control Output')
    ax2.set_title('PID Control Signal')
    ax2.grid(True, alpha=0.3)

    # Plot error
    errors = setpoints - measurements
    ax3.plot(times, errors, 'r-', linewidth=2)
    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Error')
    ax3.set_title('Tracking Error')
    ax3.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.show()

    return times, setpoints, measurements, outputs

# Run PID control example
times, setpoints, measurements, outputs = pid_control_example()
```
PID Tuning Methods
Tuning the PID gains is crucial for good performance. Let’s implement the classic Ziegler–Nichols ultimate-sensitivity method:
```python
class PIDTuner:
    def __init__(self, system_func):
        """
        Initialize PID tuner

        Args:
            system_func: Function (pid, setpoint, duration, dt) -> (times, values)
                         that simulates the closed-loop step response
        """
        self.system_func = system_func

    def ziegler_nichols(self, setpoint=1.0, max_gain=10.0):
        """
        Ziegler-Nichols tuning method (ultimate sensitivity)

        Args:
            setpoint: Desired setpoint
            max_gain: Upper bound for the ultimate-gain search

        Returns:
            PID parameters (Kp, Ki, Kd)
        """
        # Step 1: Find ultimate gain Ku and ultimate period Tu
        Kp_ultimate = self._find_ultimate_gain(setpoint, max_gain)
        Tu = self._find_ultimate_period(Kp_ultimate, setpoint)

        # Classic Ziegler-Nichols PID rules: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8
        Kp = 0.6 * Kp_ultimate
        Ki = 1.2 * Kp_ultimate / Tu     # Ki = Kp / Ti
        Kd = 0.075 * Kp_ultimate * Tu   # Kd = Kp * Td

        return Kp, Ki, Kd

    def _find_ultimate_gain(self, setpoint, max_gain):
        """Find the ultimate gain Ku by bisection on proportional-only control"""
        low, high = 0.0, max_gain
        tolerance = 0.1
        max_iterations = 20

        for _ in range(max_iterations):
            Kp = (low + high) / 2

            # Test with proportional control only
            pid = PIDController(Kp, 0, 0, setpoint)
            response = self._simulate_step_response(pid, setpoint)

            # Sustained oscillation means Kp is at or above Ku, so search lower
            if self._has_sustained_oscillation(response):
                high = Kp
            else:
                low = Kp

            if high - low < tolerance:
                break

        return (low + high) / 2

    def _find_ultimate_period(self, Kp_ultimate, setpoint):
        """Find the ultimate period Tu from the oscillation peaks"""
        pid = PIDController(Kp_ultimate, 0, 0, setpoint)
        times, values = self._simulate_step_response(pid, setpoint)

        # Find peaks in the response
        peaks = []
        for i in range(1, len(values) - 1):
            if values[i] > values[i-1] and values[i] > values[i+1]:
                peaks.append(times[i])

        if len(peaks) >= 2:
            # Average period between successive peaks
            periods = [peaks[i+1] - peaks[i] for i in range(len(peaks)-1)]
            return np.mean(periods)

        return 2.0  # Fallback period when no clear oscillation is found

    def _has_sustained_oscillation(self, response, threshold=0.1):
        """Check if the tail of the response still oscillates around the setpoint"""
        times, values = response
        # If the standard deviation of the last 100 samples is significant,
        # the response has not settled
        return np.std(values[-100:]) > threshold

    def _simulate_step_response(self, pid, setpoint, duration=20.0, dt=0.01):
        """Simulate a step response using the supplied system function"""
        return self.system_func(pid, setpoint, duration, dt)
```
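The hand-rolled peak detector above counts every local maximum, so measurement noise produces spurious peaks and corrupts the period estimate. `scipy.signal.find_peaks` is a more robust drop-in for that step; a small sketch on a synthetic noisy oscillation (the signal and prominence value are illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

# A noisy oscillation with a 2-second period, sampled at 100 Hz
t = np.arange(0, 20, 0.01)
rng = np.random.default_rng(0)
values = np.sin(2 * np.pi * t / 2.0) + 0.05 * rng.standard_normal(t.size)

# Require a minimum prominence so noise bumps are not counted as peaks
peak_idx, _ = find_peaks(values, prominence=0.5)
Tu = np.mean(np.diff(t[peak_idx]))  # estimated ultimate period
```

The `prominence` threshold filters out small wiggles that a naive three-point comparison would report as peaks.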
```python
# Example PID tuning
def system_simulation(y, t, u):
    """Simple first-order system: dy/dt = -2*y + u"""
    return -2 * y + u

def pid_tuning_example():
    """Demonstrate PID tuning"""
    # Closed-loop simulator with the signature PIDTuner expects
    def simulate_system(pid, setpoint=1.0, duration=20.0, dt=0.01):
        times = np.arange(0, duration, dt)
        values = []
        current_value = 0.0
        for t in times:
            control_output = pid.update(current_value, t)
            current_value += system_simulation(current_value, t, control_output) * dt
            values.append(current_value)
        return times, np.array(values)

    # Create PID tuner
    tuner = PIDTuner(simulate_system)

    # Get Ziegler-Nichols parameters
    # (a first-order plant cannot sustain oscillation under P control,
    # so the search degrades gracefully to conservative fallback values)
    Kp, Ki, Kd = tuner.ziegler_nichols()
    print(f"Ziegler-Nichols parameters: Kp={Kp:.3f}, Ki={Ki:.3f}, Kd={Kd:.3f}")

    # Test tuned parameters
    pid_tuned = PIDController(Kp, Ki, Kd, setpoint=1.0)
    times, response = simulate_system(pid_tuned)

    # Plot results
    plt.figure(figsize=(12, 8))

    plt.subplot(2, 1, 1)
    plt.plot(times, np.ones_like(times), 'r--', linewidth=2, label='Setpoint')
    plt.plot(times, response, 'b-', linewidth=2, label='Response')
    plt.ylabel('Value')
    plt.title('PID Control with Ziegler-Nichols Tuning')
    plt.legend()
    plt.grid(True, alpha=0.3)

    plt.subplot(2, 1, 2)
    errors = np.ones_like(times) - response
    plt.plot(times, errors, 'r-', linewidth=2)
    plt.xlabel('Time (s)')
    plt.ylabel('Error')
    plt.title('Tracking Error')
    plt.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.show()

# Run PID tuning example
pid_tuning_example()
```
Model Predictive Control (MPC)
MPC is an advanced control strategy that uses a dynamic model to predict future behavior and optimize control actions over a prediction horizon.
MPC Fundamentals
MPC works by:
- Model prediction: Use a dynamic model to predict future system behavior
- Optimization: Solve an optimization problem to find optimal control inputs
- Receding horizon: Apply only the first control input and repeat the process
- Constraint handling: Explicitly handle input and state constraints
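The receding-horizon loop described by the steps above fits in a few lines. A minimal sketch (the double-integrator model, horizon length, and crude constant-input search are illustrative placeholders; the full controller below uses a gradient-based optimizer instead):

```python
import numpy as np

# Discrete double integrator used as both plant and prediction model (illustrative)
dt = 0.1
A = np.array([[1, dt], [0, 1]])
B = np.array([[0.5 * dt**2], [dt]])

def predict(x0, U):
    """Roll the model forward over the control sequence U."""
    x, traj = x0, []
    for u in U:
        x = A @ x + B @ np.atleast_1d(u)
        traj.append(x)
    return np.array(traj)

def cost(U, x0, x_ref):
    """Quadratic tracking + effort cost over the horizon."""
    X = predict(x0, U)
    return float(np.sum((X - x_ref) ** 2) + 0.01 * np.sum(np.square(U)))

x = np.array([0.0, 0.0])
x_ref = np.tile([1.0, 0.0], (10, 1))  # hold position 1 with zero velocity
for _ in range(50):
    # Steps 1-2: predict + optimize (here: brute-force over constant inputs)
    candidates = np.linspace(-1, 1, 21)
    best_u = min(candidates, key=lambda u: cost(np.repeat(u, 10), x, x_ref))
    # Step 3: apply only the first input, then re-plan from the new state
    x = A @ x + B @ np.atleast_1d(best_u)
```

Even with this crude optimizer, re-planning at every step pulls the state toward the reference, which is the essence of the receding-horizon idea.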
MPC Implementation
```python
class MPCController:
    def __init__(self, model_matrix, prediction_horizon=10, control_horizon=5):
        """
        Initialize MPC controller

        Args:
            model_matrix: Discrete-time state-space model matrices (A, B)
            prediction_horizon: Number of steps to predict ahead
            control_horizon: Number of control steps to optimize
        """
        self.A, self.B = model_matrix
        self.prediction_horizon = prediction_horizon
        self.control_horizon = control_horizon
        self.dt = 0.1  # Time step

        # Cost function weights
        self.Q = np.eye(self.A.shape[0])  # State cost
        self.R = np.eye(self.B.shape[1])  # Control cost

        # Constraints
        self.u_min = -1.0
        self.u_max = 1.0
        self.x_min = -10.0
        self.x_max = 10.0

    def predict(self, x0, U):
        """
        Predict future states given initial state and control sequence

        Args:
            x0: Initial state
            U: Control sequence

        Returns:
            Predicted states
        """
        X = np.zeros((self.prediction_horizon, self.A.shape[0]))
        X[0] = x0

        for k in range(1, self.prediction_horizon):
            # x_k = A x_{k-1} + B u_{k-1}; hold the last input beyond
            # the control horizon
            u = U[k-1] if k-1 < len(U) else U[-1]
            X[k] = self.A @ X[k-1] + self.B @ u

        return X

    def cost_function(self, U, x0, x_ref):
        """
        Calculate cost function for a control sequence

        Args:
            U: Control sequence
            x0: Initial state
            x_ref: Reference trajectory

        Returns:
            Cost value
        """
        X = self.predict(x0, U)

        # Tracking error cost
        tracking_cost = 0.0
        for k in range(self.prediction_horizon):
            if k < len(x_ref):
                error = X[k] - x_ref[k]
                tracking_cost += error.T @ self.Q @ error

        # Control effort cost
        control_cost = 0.0
        for k in range(min(len(U), self.control_horizon)):
            control_cost += U[k].T @ self.R @ U[k]

        return tracking_cost + control_cost

    def optimize(self, x0, x_ref):
        """
        Optimize the control sequence by gradient descent with
        numerical (forward-difference) gradients

        Args:
            x0: Current state
            x_ref: Reference trajectory

        Returns:
            Optimal control sequence
        """
        # Initial guess (zero control)
        U = np.zeros((self.control_horizon, self.B.shape[1]))

        # Optimization parameters
        learning_rate = 0.01
        max_iterations = 1000
        tolerance = 1e-6
        eps = 1e-3  # Finite-difference step

        for iteration in range(max_iterations):
            current_cost = self.cost_function(U, x0, x_ref)

            # Forward-difference gradient, one control step at a time
            gradients = np.zeros_like(U)
            for k in range(self.control_horizon):
                U_perturbed = U.copy()
                U_perturbed[k] += eps
                gradients[k] = (self.cost_function(U_perturbed, x0, x_ref)
                                - current_cost) / eps

            # Gradient step, then project back onto the input constraints
            U -= learning_rate * gradients
            U = np.clip(U, self.u_min, self.u_max)

            # Check convergence
            if np.linalg.norm(gradients) < tolerance:
                break

        return U

    def control_step(self, x_current, x_ref):
        """
        Perform one MPC control step: optimize over the horizon,
        then apply only the first control input (receding horizon)

        Args:
            x_current: Current state
            x_ref: Reference trajectory

        Returns:
            Control input for the current step
        """
        U_optimal = self.optimize(x_current, x_ref)
        return U_optimal[0]
```
```python
# Example MPC control
def mpc_control_example():
    """Demonstrate MPC control on a simple system"""
    # System: double integrator (position and velocity), discretized
    # with time step dt so that x_{k+1} = A x_k + B u_k
    dt = 0.1
    A = np.array([[1, dt], [0, 1]])      # Discrete state matrix
    B = np.array([[0.5 * dt**2], [dt]])  # Discrete control matrix
    model_matrix = (A, B)

    # Create MPC controller
    mpc = MPCController(model_matrix, prediction_horizon=20, control_horizon=5)

    # Simulation parameters
    total_time = 20.0
    time_steps = int(total_time / dt)

    # Arrays for storing results
    times = np.linspace(0, total_time, time_steps)
    states = np.zeros((time_steps, 2))
    controls = np.zeros(time_steps)
    references = np.zeros((time_steps, 2))

    # Initial conditions
    x_current = np.array([0.0, 0.0])  # [position, velocity]

    # Reference trajectory: position steps from 0 to 1 at t = 5 s
    references[times >= 5] = [1, 0]

    # Run simulation
    for i in range(time_steps):
        # Reference over the prediction horizon (may be shorter near the end)
        x_ref = references[i:i + mpc.prediction_horizon]

        # Get MPC control
        u = mpc.control_step(x_current, x_ref)
        controls[i] = u[0]

        # Update state with the same discrete model
        x_current = A @ x_current + B @ u
        states[i] = x_current

    # Plot results
    fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(12, 10))

    # Plot position
    ax1.plot(times, references[:, 0], 'r--', linewidth=2, label='Reference')
    ax1.plot(times, states[:, 0], 'b-', linewidth=2, label='Position')
    ax1.set_ylabel('Position')
    ax1.set_title('MPC Control Response')
    ax1.legend()
    ax1.grid(True, alpha=0.3)

    # Plot velocity
    ax2.plot(times, references[:, 1], 'r--', linewidth=2, label='Reference')
    ax2.plot(times, states[:, 1], 'g-', linewidth=2, label='Velocity')
    ax2.set_ylabel('Velocity')
    ax2.set_title('Velocity Response')
    ax2.legend()
    ax2.grid(True, alpha=0.3)

    # Plot control input
    ax3.plot(times, controls, 'm-', linewidth=2)
    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Control Input')
    ax3.set_title('MPC Control Signal')
    ax3.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.show()

    return times, states, controls, references

# Run MPC control example
times, states, controls, references = mpc_control_example()
```
Advanced MPC with Constraints
Let’s enhance the MPC controller with explicit constraint handling:
```python
class ConstrainedMPC(MPCController):
    def __init__(self, model_matrix, prediction_horizon=10, control_horizon=5):
        super().__init__(model_matrix, prediction_horizon, control_horizon)

        # Extended constraints: limits on the control input rate
        self.delta_u_min = -0.5
        self.delta_u_max = 0.5

        # Setup optimization problem
        self.setup_optimization()

    def setup_optimization(self):
        """Setup optimization problem matrices"""
        n_states = self.A.shape[0]
        n_controls = self.B.shape[1]

        # Build matrices for quadratic programming
        self.H = self._build_H_matrix()
        self.f = np.zeros(self.control_horizon * n_controls)

        # Build constraint matrices
        self.A_eq = self._build_equality_constraints()
        self.b_eq = np.zeros(n_states)
        self.A_ineq, self.b_ineq = self._build_inequality_constraints()

    def _build_H_matrix(self):
        """Build the H matrix for quadratic programming.

        Note: only the control-effort (R) blocks are filled in here; the full
        condensed formulation would also add state-cost (Q) terms mapped
        through the prediction matrices.
        """
        n_controls = self.B.shape[1]
        n = self.control_horizon * n_controls
        H = np.zeros((n, n))

        # Diagonal R blocks
        for i in range(self.control_horizon):
            H[i*n_controls:(i+1)*n_controls,
              i*n_controls:(i+1)*n_controls] = self.R

        return H

    def _build_equality_constraints(self):
        """Build the equality constraint matrix (first prediction step only)"""
        n_states = self.A.shape[0]
        n_controls = self.B.shape[1]

        A_eq = np.zeros((n_states, self.control_horizon * n_controls))
        # First constraint: x1 = A*x0 + B*u0
        A_eq[:, :n_controls] = self.B
        return A_eq

    def _build_inequality_constraints(self):
        """Build inequality constraint matrices, as A_ineq @ u <= b_ineq"""
        n_controls = self.B.shape[1]
        n_constraints = 4 * self.control_horizon  # u_min, u_max, delta_u_min, delta_u_max

        A_ineq = np.zeros((n_constraints, self.control_horizon * n_controls))
        b_ineq = np.zeros(n_constraints)

        for i in range(self.control_horizon):
            # u_i >= u_min  rewritten as  -u_i <= -u_min
            A_ineq[4*i, i*n_controls:(i+1)*n_controls] = -np.eye(n_controls)
            b_ineq[4*i] = -self.u_min

            # u_i <= u_max
            A_ineq[4*i+1, i*n_controls:(i+1)*n_controls] = np.eye(n_controls)
            b_ineq[4*i+1] = self.u_max

            if i > 0:
                # u_i - u_{i-1} >= delta_u_min  rewritten as
                # u_{i-1} - u_i <= -delta_u_min
                A_ineq[4*i+2, (i-1)*n_controls:i*n_controls] = np.eye(n_controls)
                A_ineq[4*i+2, i*n_controls:(i+1)*n_controls] = -np.eye(n_controls)
                b_ineq[4*i+2] = -self.delta_u_min

                # u_i - u_{i-1} <= delta_u_max
                A_ineq[4*i+3, (i-1)*n_controls:i*n_controls] = -np.eye(n_controls)
                A_ineq[4*i+3, i*n_controls:(i+1)*n_controls] = np.eye(n_controls)
                b_ineq[4*i+3] = self.delta_u_max

        return A_ineq, b_ineq

    def solve_qp(self, H, f, A_eq, b_eq, A_ineq, b_ineq):
        """Solve the quadratic program.

        Note: this is a toy projected-gradient loop for demonstration only,
        and it projects onto the input box constraints alone. In practice
        you would use a proper QP solver such as CVXOPT or OSQP, which
        handles the equality and rate constraints as well.
        """
        U = np.zeros(self.control_horizon * self.B.shape[1])

        for iteration in range(100):
            # Gradient of the quadratic cost u^T H u + f^T u
            grad = 2 * H @ U + f
            U -= 0.01 * grad

            # Project onto the input box constraints
            U = np.clip(U, self.u_min, self.u_max)

        return U.reshape(self.control_horizon, self.B.shape[1])

    def optimize(self, x0, x_ref):
        """Optimize with constraints.

        Note: in the full condensed formulation, the linear term f encodes
        the tracking objective and is computed from x0 and x_ref via the
        prediction matrices; with f = 0 this simplified version only
        minimizes control effort.
        """
        f = np.zeros(self.control_horizon * self.B.shape[1])
        return self.solve_qp(self.H, f, self.A_eq, self.b_eq,
                             self.A_ineq, self.b_ineq)
```
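As the comments above note, the toy QP loop drops the tracking objective entirely. A more reliable route that still avoids hand-rolled projection is `scipy.optimize.minimize` with the SLSQP method, which accepts box bounds directly. A hedged sketch, with an illustrative double-integrator model and weights not tied to the classes above:

```python
import numpy as np
from scipy.optimize import minimize

# Discrete double integrator (illustrative model)
dt = 0.1
A = np.array([[1, dt], [0, 1]])
B = np.array([[0.5 * dt**2], [dt]])
N = 10  # control horizon

def mpc_cost(U, x0, target):
    """Tracking + effort cost for a flat control vector U of length N."""
    x, cost = x0.copy(), 0.0
    for u in U:
        x = A @ x + B @ np.array([u])
        cost += np.sum((x - target) ** 2) + 0.01 * u**2
    return cost

x0 = np.array([0.0, 0.0])
target = np.array([1.0, 0.0])
res = minimize(mpc_cost, np.zeros(N), args=(x0, target),
               method='SLSQP', bounds=[(-1.0, 1.0)] * N)
u_first = res.x[0]  # apply only the first input, then re-plan
```

Rate constraints like the delta-u limits above can be added through SLSQP's `constraints` argument; for larger problems a dedicated QP solver (OSQP, CVXOPT) remains the better choice.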
Robot Control Applications
1. Differential Drive Robot Control
```python
class DifferentialDriveController:
    def __init__(self, wheelbase=0.5, max_velocity=1.0, max_angular_velocity=2.0):
        """
        Initialize differential drive controller

        Args:
            wheelbase: Distance between wheels
            max_velocity: Maximum linear velocity
            max_angular_velocity: Maximum angular velocity
        """
        self.wheelbase = wheelbase
        self.max_velocity = max_velocity
        self.max_angular_velocity = max_angular_velocity

        # PID controllers acting on the distance and heading errors
        self.linear_pid = PIDController(2.0, 0.1, 0.5,
                                        output_limits=(-max_velocity, max_velocity))
        self.angular_pid = PIDController(3.0, 0.1, 1.0,
                                         output_limits=(-max_angular_velocity,
                                                        max_angular_velocity))
        self.time = 0.0

    @staticmethod
    def normalize_angle(angle):
        """Wrap an angle to [-pi, pi]"""
        return (angle + np.pi) % (2 * np.pi) - np.pi

    def inverse_kinematics(self, v, w):
        """
        Convert body velocities to wheel velocities

        Args:
            v: Linear velocity
            w: Angular velocity

        Returns:
            Left and right wheel velocities
        """
        v_left = v - w * self.wheelbase / 2
        v_right = v + w * self.wheelbase / 2
        return v_left, v_right

    def control_step(self, current_pose, target_pose, dt=0.1):
        """
        Perform one control step

        Args:
            current_pose: [x, y, theta] current pose
            target_pose: [x, y, theta] target pose
            dt: Time step

        Returns:
            (v_left, v_right) wheel velocities
        """
        self.time += dt

        # Position error
        dx = target_pose[0] - current_pose[0]
        dy = target_pose[1] - current_pose[1]
        distance = np.hypot(dx, dy)

        if distance > 0.1:
            # Far from the goal: drive forward while steering onto the
            # line of sight to the target position
            heading_error = self.normalize_angle(np.arctan2(dy, dx) - current_pose[2])
            # PID on the distance error (setpoint 0, measurement -distance)
            v = self.linear_pid.update(-distance, self.time)
        else:
            # Close to the goal: stop and turn to the final orientation
            heading_error = self.normalize_angle(target_pose[2] - current_pose[2])
            v = 0.0

        # PID on the heading error
        w = self.angular_pid.update(-heading_error, self.time)

        # Convert to wheel velocities
        return self.inverse_kinematics(v, w)
```
```python
# Example differential drive control
def differential_drive_example():
    """Demonstrate differential drive control"""
    # Create controller
    controller = DifferentialDriveController()

    # Simulation parameters
    dt = 0.1
    total_time = 10.0
    time_steps = int(total_time / dt)

    # Arrays for storing results
    times = np.linspace(0, total_time, time_steps)
    poses = np.zeros((time_steps, 3))
    wheel_velocities = np.zeros((time_steps, 2))

    # Initial pose
    current_pose = np.array([0.0, 0.0, 0.0])

    # Target pose
    target_pose = np.array([3.0, 2.0, np.pi/4])

    # Run simulation
    for i in range(time_steps):
        # Control step
        v_left, v_right = controller.control_step(current_pose, target_pose, dt)
        wheel_velocities[i] = [v_left, v_right]

        # Update pose with unicycle kinematics (Euler integration)
        v = (v_left + v_right) / 2
        w = (v_right - v_left) / controller.wheelbase
        current_pose[0] += v * np.cos(current_pose[2]) * dt
        current_pose[1] += v * np.sin(current_pose[2]) * dt
        current_pose[2] += w * dt
        poses[i] = current_pose

    # Plot results
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))

    # Plot robot trajectory
    ax1.plot(poses[:, 0], poses[:, 1], 'b-', linewidth=2, label='Robot Path')
    ax1.plot(poses[0, 0], poses[0, 1], 'go', markersize=10, label='Start')
    ax1.plot(target_pose[0], target_pose[1], 'r*', markersize=15, label='Target')
    ax1.plot(poses[-1, 0], poses[-1, 1], 'ro', markersize=10, label='End')

    # Plot robot orientation along the path
    for i in range(0, len(poses), 10):
        x, y, theta = poses[i]
        dx = 0.2 * np.cos(theta)
        dy = 0.2 * np.sin(theta)
        ax1.arrow(x, y, dx, dy, head_width=0.1, head_length=0.05, fc='blue', ec='blue')

    ax1.set_xlabel('X Position (m)')
    ax1.set_ylabel('Y Position (m)')
    ax1.set_title('Differential Drive Robot Trajectory')
    ax1.legend()
    ax1.grid(True, alpha=0.3)
    ax1.set_aspect('equal')

    # Plot wheel velocities
    ax2.plot(times, wheel_velocities[:, 0], 'b-', linewidth=2, label='Left Wheel')
    ax2.plot(times, wheel_velocities[:, 1], 'r-', linewidth=2, label='Right Wheel')
    ax2.set_xlabel('Time (s)')
    ax2.set_ylabel('Wheel Velocity (m/s)')
    ax2.set_title('Wheel Velocities')
    ax2.legend()
    ax2.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.show()

# Run differential drive example
differential_drive_example()
```
Key Takeaways
- PID Controllers: Simple, robust, and widely used control algorithm
- PID Tuning: Ziegler-Nichols and other methods for parameter optimization
- Model Predictive Control: Advanced control with prediction and optimization
- Constraint Handling: Explicit constraint satisfaction in MPC
- Robot Applications: Differential drive and other robot-specific control implementations
- Performance Trade-offs: PID is simple but limited, MPC is powerful but computationally expensive
Next Steps
Tomorrow, we’ll complete Week 4 with a comprehensive summary and preview of Week 5. We’ll review all the concepts covered this week and explore advanced topics in robot control and planning.
Practice Exercise: Implement a complete robot control system that combines path planning (from previous days) with PID/MPC control. Create a simulation environment that tests your system with various scenarios including disturbances, model uncertainties, and different robot types. Analyze the performance and compare different control strategies.