Week 03 | SKILL-019

The Jacobian Method: Velocity-Based Inverse Kinematics

Published: April 9, 2026 | Author: Smartotics Learning Journey | Reading Time: 12 min

Yesterday we solved IK with beautiful closed-form equations. But what if no such equations exist? Enter the Jacobian matrix — a tool that relates how fast each joint moves to how fast the end-effector moves. By inverting this relationship, we can solve IK for any robot.

What Is the Jacobian?

The Jacobian matrix J captures the partial derivatives of the end-effector position (and orientation) with respect to each joint variable:

$$\dot{\mathbf{x}} = \mathbf{J}(\mathbf{q}) \cdot \dot{\mathbf{q}}$$

Where:

  - ẋ is the end-effector velocity (linear, and possibly angular),
  - q̇ is the vector of joint velocities,
  - J(q) is the Jacobian, which depends on the current joint configuration q.

For a planar 2-DOF arm with link lengths L₁, L₂ and joint angles θ₁, θ₂:

$$J = \begin{bmatrix} \frac{\partial p_x}{\partial \theta_1} & \frac{\partial p_x}{\partial \theta_2} \\ \frac{\partial p_y}{\partial \theta_1} & \frac{\partial p_y}{\partial \theta_2} \end{bmatrix} = \begin{bmatrix} -L_1 s_1 - L_2 s_{12} & -L_2 s_{12} \\ L_1 c_1 + L_2 c_{12} & L_2 c_{12} \end{bmatrix}$$

Where s₁ = sin(θ₁), c₁ = cos(θ₁), s₁₂ = sin(θ₁+θ₂), etc.
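Before trusting a hand-derived Jacobian, it is worth checking it numerically: each column should match a finite-difference derivative of the FK. A minimal sketch, with link lengths L1 = 2.0 and L2 = 2.5 chosen arbitrarily for illustration:

```python
import numpy as np

def fk(q, L1=2.0, L2=2.5):
    # Forward kinematics of the planar 2-DOF arm
    t1, t2 = q
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jac_analytical(q, L1=2.0, L2=2.5):
    # The closed-form Jacobian derived above
    t1, t2 = q
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def jac_numeric(q, eps=1e-6):
    # Central finite differences of the FK w.r.t. each joint angle
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.7, -0.4])
print(np.max(np.abs(jac_analytical(q) - jac_numeric(q))))  # ~1e-10
```

If the two matrices disagree by more than roundoff, the analytical derivation has a sign or index error somewhere.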

From Velocity to Position: The IK Trick

We want to find joint angles for a target position, not velocities. The insight is:

  1. Compute the error between current and target position: e = x_target - x_current
  2. Use the Jacobian to compute joint angle corrections: Δq = J⁻¹ · e
  3. Update the joint angles: q ← q + Δq
  4. Repeat until convergence

This is essentially Newton’s method applied to the FK equations!

Computing the Jacobian: Two Approaches

Analytical Jacobian

Differentiate the forward kinematics equations directly. We did this above for the 2-DOF arm.

Geometric Jacobian

Uses the cross-product formula — works for any serial chain without differentiating FK:

For joint i, the i-th column of J is:

$$J_i = \begin{cases} z_{i-1} \times (o_n - o_{i-1}) & \text{if revolute} \\ z_{i-1} & \text{if prismatic} \end{cases}$$

Where z_{i-1} is the rotation axis of joint i, o_n is the end-effector origin, and o_{i-1} is the origin of frame i-1.

The geometric method is more general and easier to implement programmatically.
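As an illustration of the cross-product formula, here is a sketch that rebuilds the 2-DOF Jacobian column by column. For a planar arm both joint axes are z = (0, 0, 1); the helper name and link lengths are assumptions for this example:

```python
import numpy as np

def geometric_jacobian_2dof(q, L1=2.0, L2=2.5):
    """Build J column by column via J_i = z_{i-1} x (o_n - o_{i-1})."""
    t1, t2 = q
    z = np.array([0.0, 0.0, 1.0])    # both revolute axes point out of the plane
    o0 = np.zeros(3)                 # base frame origin
    o1 = np.array([L1 * np.cos(t1), L1 * np.sin(t1), 0.0])  # joint-2 origin
    on = o1 + np.array([L2 * np.cos(t1 + t2),
                        L2 * np.sin(t1 + t2), 0.0])          # end-effector origin
    J1 = np.cross(z, on - o0)        # column for joint 1
    J2 = np.cross(z, on - o1)        # column for joint 2
    return np.column_stack([J1, J2])[:2]   # keep only the planar (x, y) rows

print(geometric_jacobian_2dof(np.array([0.3, 0.9])))
```

The result matches the analytical Jacobian entry for entry, but nothing here required differentiating the FK equations, which is why this method scales to arbitrary chains.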

The Pseudoinverse

What if J isn’t square (e.g., 6 joints, 3 position constraints)? We can’t simply invert it. Instead, we use the Moore-Penrose pseudoinverse J⁺:

$$\Delta\mathbf{q} = \mathbf{J}^+ \cdot \mathbf{e} = \mathbf{J}^T (\mathbf{J} \mathbf{J}^T)^{-1} \cdot \mathbf{e}$$

For redundant robots (more DOF than constraints), J⁺ gives the minimum-norm solution — the smallest joint change that achieves the desired motion. This is incredibly useful and we’ll explore it more in D05.
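A small numerical sketch of the minimum-norm property, using a random 3×6 "Jacobian" purely for illustration: any other solution differs from the pseudoinverse one by a null-space term, and is never shorter.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 6))   # redundant case: 6 joints, 3 constraints
e = rng.standard_normal(3)        # desired end-effector correction

dq = np.linalg.pinv(J) @ e        # minimum-norm joint correction
print(np.allclose(J @ dq, e))     # achieves the task exactly: True

# Adding any null-space component still solves the task, but costs more motion
n = (np.eye(6) - np.linalg.pinv(J) @ J) @ rng.standard_normal(6)
print(np.allclose(J @ (dq + n), e))                   # True
print(np.linalg.norm(dq) <= np.linalg.norm(dq + n))   # True
```

Because dq is orthogonal to the null space of J, ||dq + n||² = ||dq||² + ||n||², so the pseudoinverse solution is the cheapest joint motion that accomplishes the task.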

Python Implementation

import numpy as np

def forward_kinematics_2dof(q, L1, L2):
    """Compute end-effector position for a 2-DOF planar arm."""
    theta1, theta2 = q
    px = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    py = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([px, py])

def jacobian_2dof(q, L1, L2):
    """Compute the 2x2 Jacobian for a 2-DOF planar arm."""
    theta1, theta2 = q
    s1 = np.sin(theta1)
    c1 = np.cos(theta1)
    s12 = np.sin(theta1 + theta2)
    c12 = np.cos(theta1 + theta2)
    
    J = np.array([
        [-L1 * s1 - L2 * s12,  -L2 * s12],
        [ L1 * c1 + L2 * c12,   L2 * c12]
    ])
    return J

def ik_jacobian(target, q_init, L1, L2, max_iter=1000, tol=1e-4, alpha=0.5):
    """
    Solve IK using the Jacobian pseudoinverse method.
    
    Args:
        target: Desired end-effector position [px, py]
        q_init: Initial joint angles [theta1, theta2]
        L1, L2: Link lengths
        max_iter: Maximum iterations
        tol: Convergence tolerance
        alpha: Step size (damping factor for stability)
    
    Returns:
        (q_final, converged, error_history)
    """
    q = np.array(q_init, dtype=float)
    errors = []
    
    for i in range(max_iter):
        x_current = forward_kinematics_2dof(q, L1, L2)
        error = target - x_current
        err_norm = np.linalg.norm(error)
        errors.append(err_norm)
        
        if err_norm < tol:
            return q, True, errors
        
        J = jacobian_2dof(q, L1, L2)
        # Use pseudoinverse (np.linalg.pinv handles singular cases)
        dq = np.linalg.pinv(J) @ error
        
        q += alpha * dq
    
    return q, False, errors

# Solve IK for target (3.5, 1.0) starting from a random config
target = np.array([3.5, 1.0])
q_init = np.array([0.3, -0.5])

q_solution, converged, errors = ik_jacobian(
    target, q_init, L1=2.0, L2=2.5, alpha=0.8
)

print(f"Converged: {converged}")
print(f"Joint angles: θ₁={np.degrees(q_solution[0]):.1f}°, θ₂={np.degrees(q_solution[1]):.1f}°")
print(f"Final position: {forward_kinematics_2dof(q_solution, 2.0, 2.5)}")
print(f"Final error: {errors[-1]:.6f}")

Singularity Handling with Damped Least Squares

Near singularities, J becomes ill-conditioned and J⁺ produces huge joint velocities. The Damped Least Squares (DLS) method (also called Levenberg-Marquardt) adds a damping term:

$$\Delta\mathbf{q} = \mathbf{J}^T(\mathbf{J}\mathbf{J}^T + \lambda^2 \mathbf{I})^{-1} \cdot \mathbf{e}$$

The parameter λ controls damping: larger λ means smoother but slower convergence. A common strategy is to adapt λ to the manipulability measure w = √(det(JJᵀ)), which shrinks to zero at a singularity, so damping kicks in only where it is needed.

def ik_damped_ls(target, q_init, L1, L2, lam=0.1, max_iter=500, tol=1e-4):
    """IK with Damped Least Squares for singularity robustness."""
    q = np.array(q_init, dtype=float)
    
    for _ in range(max_iter):
        x = forward_kinematics_2dof(q, L1, L2)
        error = target - x
        
        if np.linalg.norm(error) < tol:
            return q, True
        
        J = jacobian_2dof(q, L1, L2)
        JJT = J @ J.T + lam**2 * np.eye(2)
        dq = J.T @ np.linalg.solve(JJT, error)
        
        q += dq
    
    return q, False
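One way to realize the manipulability-based strategy mentioned above is to ramp λ up only when w = √(det(JJᵀ)) falls below a threshold. The ramp shape and the constants w0 and lam_max below are illustrative choices, not a standard:

```python
import numpy as np

def jacobian_2dof(q, L1=2.0, L2=2.5):
    # Same 2x2 planar-arm Jacobian as above, repeated for self-containment
    t1, t2 = q
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def adaptive_lambda(J, w0=0.5, lam_max=0.2):
    """Return 0 far from singularities; blend toward lam_max as w -> 0."""
    w = np.sqrt(np.linalg.det(J @ J.T))   # manipulability measure
    if w >= w0:
        return 0.0                        # well-conditioned: undamped step
    return lam_max * (1.0 - w / w0)       # near-singular: ramp damping up

print(adaptive_lambda(jacobian_2dof([0.3, 1.5])))   # healthy elbow bend
print(adaptive_lambda(jacobian_2dof([0.3, 0.01])))  # nearly outstretched arm
```

For the 2-DOF arm, w = |L₁L₂ sin θ₂|, so the first call (elbow well bent) returns zero damping while the second (arm nearly straight, a singularity) returns a nonzero λ.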

Advantages and Limitations

Advantages:

  - Works for any serial chain; no closed-form solution is required
  - Handles redundant robots naturally via the pseudoinverse
  - The geometric Jacobian can be built programmatically, joint by joint

Limitations:

  - Iterative, so slower than closed-form IK and sensitive to the initial guess
  - Unstable near singularities unless damping (DLS) is used
  - Converges to a single solution, the one nearest the starting configuration
  - Joint limits are not respected unless handled separately

Key Takeaways

  - The Jacobian maps joint velocities to end-effector velocities: ẋ = J(q)·q̇
  - Inverting (or pseudoinverting) that map turns velocity kinematics into an iterative IK solver, essentially Newton's method on the FK equations
  - The Moore-Penrose pseudoinverse handles non-square J and returns the minimum-norm joint update
  - Damped Least Squares trades a little tracking accuracy for stability near singularities

Next Steps

Tomorrow we’ll dive deeper into numerical iterative methods including Newton-Raphson and the Cyclic Coordinate Descent (CCD) algorithm — techniques that build on the Jacobian foundation while adding clever strategies for faster, more reliable convergence.