Definition

Joint space (also called configuration space or C-space) is the mathematical space where each dimension corresponds to one of a robot's joints. A robot's configuration is fully described by a vector of joint values — q = (q1, q2, ..., qn) — where each qi is a joint angle (for revolute joints) or a linear displacement (for prismatic joints). A 6-DOF robot arm lives in a 6-dimensional joint space; a humanoid with 30 actuated joints occupies a 30-dimensional space.

Joint space is the counterpart of task space (also called Cartesian space or operational space), which describes the end-effector's position and orientation in the physical world. The mapping from joint space to task space is forward kinematics; the reverse mapping is inverse kinematics (IK). This fundamental distinction underpins nearly every decision in robot control, planning, and learning system design.
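As a concrete sketch of the joint-to-task mapping, the forward kinematics of a planar 2-link arm can be written in a few lines. The link lengths below are illustrative, not taken from any specific robot:

```python
import math

def forward_kinematics(q, l1=0.3, l2=0.25):
    """Map a joint configuration q = (q1, q2) of a planar 2-link arm
    to the end-effector position (x, y) in task space."""
    q1, q2 = q
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Fully extended along the x-axis: q = (0, 0)
print(forward_kinematics((0.0, 0.0)))  # approximately (0.55, 0.0)
```

Note that the forward direction is a simple closed-form function, while the reverse (IK) generally is not: several joint configurations can reach the same (x, y), and some (x, y) are unreachable.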

Joint Space vs Task Space

Joint space is the natural representation for the robot's actuators. Commands in joint space map directly to motor positions or velocities, making execution straightforward and predictable. Interpolating between two joint configurations produces smooth motor trajectories. However, the resulting end-effector path in Cartesian space may be curved, non-intuitive, or even pass through obstacles, because straight lines in joint space do not correspond to straight lines in Cartesian space.
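The curvature effect can be demonstrated numerically: interpolate linearly between two configurations of a planar 2-link arm (link lengths illustrative) and compare the end-effector at the joint-space midpoint against the midpoint of the straight Cartesian segment:

```python
import math

def fk(q1, q2, l1=0.3, l2=0.25):
    # End-effector position of a planar 2-link arm.
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q_start = (0.0, 0.0)
q_goal = (math.pi / 2, math.pi / 2)

# Linear interpolation in joint space: smooth for the motors...
q_mid = tuple((a + b) / 2 for a, b in zip(q_start, q_goal))

# ...but the resulting end-effector point is NOT the midpoint of the
# straight Cartesian segment between the two end poses.
p_start, p_goal = fk(*q_start), fk(*q_goal)
p_mid_joint = fk(*q_mid)
p_mid_straight = tuple((a + b) / 2 for a, b in zip(p_start, p_goal))

deviation = math.dist(p_mid_joint, p_mid_straight)
print(f"Cartesian deviation at midpoint: {deviation:.3f} m")  # roughly 0.32 m
```

For this particular pair of configurations the end-effector sags more than 30 cm away from the straight-line path, which is why Cartesian-critical motions (peg insertion, surface following) are usually planned in task space instead.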

Task space is the natural representation for the task. Humans think in Cartesian coordinates: "move the gripper 10cm to the right" or "follow this straight line to insert the peg." Task-space control produces intuitive end-effector paths but requires solving IK at every timestep, which introduces singularity issues, multiple solutions, and potential joint-limit violations. For redundant robots (7+ DOF), task-space commands leave null-space motion unspecified, which must be resolved by secondary objectives.

Most practical robot systems use a hybrid approach: task-level goals are specified in Cartesian space, converted to joint-space targets via IK, and then interpolated and executed in joint space with smoothing and constraint enforcement.

Joint Limits and Constraints

Physical robots impose hard constraints on joint space that define the feasible region of the configuration space:

  • Position limits — Each joint has minimum and maximum values (e.g., -170 to +170 degrees). These define a hyperrectangle in joint space. Configurations outside these limits are mechanically impossible.
  • Velocity limits — Maximum joint speeds (typically 50-200 deg/s for collaborative arms) constrain how fast the robot can move through joint space. Exceeding velocity limits triggers safety stops.
  • Acceleration limits — Bound the rate of velocity change, affecting trajectory smoothness and the forces experienced by the robot structure.
  • Torque limits — Maximum motor torques constrain what payloads and accelerations are achievable. Particularly important for whole-body control of humanoids.

The feasible joint space — configurations satisfying all limits while avoiding self-collision and environmental collisions — is called the free configuration space. Motion planning algorithms search for paths through this free space.
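A minimal feasibility check against position and velocity limits might look like the following sketch. The limit values are illustrative, not taken from any specific robot, and collision checking is omitted:

```python
import numpy as np

# Hypothetical limits for a 6-DOF arm (illustrative values).
Q_MIN = np.deg2rad(np.full(6, -170.0))   # position limits, rad
Q_MAX = np.deg2rad(np.full(6, 170.0))
QD_MAX = np.deg2rad(np.full(6, 180.0))   # max joint speed, rad/s

def within_position_limits(q):
    """Check that q lies inside the joint-limit hyperrectangle."""
    return bool(np.all(q >= Q_MIN) and np.all(q <= Q_MAX))

def within_velocity_limits(q_from, q_to, dt):
    """Check that moving q_from -> q_to in dt seconds respects speed limits."""
    qd = np.abs(q_to - q_from) / dt
    return bool(np.all(qd <= QD_MAX))
```

A full free-configuration-space test would additionally run self-collision and environment-collision checks on each candidate configuration.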

Joint Space in Robot Learning

The choice of action space is one of the most consequential design decisions when training robot policies. Joint-space policies — where the neural network directly outputs joint angles or joint velocities — have become the dominant approach in modern imitation learning:

Why joint-space policies are simpler: The mapping from policy output to motor command is direct, with no IK solver in the loop. This eliminates singularity failures, IK non-convergence, and the need for a kinematic model at inference time. Policies like ACT operate in joint space by default, and Diffusion Policy is commonly trained in joint space as well.

Why joint-space policies are less interpretable: A joint-space trajectory is difficult for humans to visualize or verify. A 7-dimensional joint vector does not reveal whether the gripper is moving left, right, up, or down without running forward kinematics. This makes debugging and safety verification harder compared to Cartesian policies where the intent is geometrically obvious.

When to use task-space policies: Task-space action spaces are preferable when the task has strong Cartesian structure (e.g., wiping a surface along a plane, following a contour), when the policy must generalize across robots with different kinematic structures, or when Cartesian safety constraints must be enforced explicitly.

Comparison: Joint Space vs Task Space for Policy Learning

Joint space advantages: No IK needed at inference. No singularity issues. Direct motor control. Simpler training pipeline. Works well for single-robot systems. Dominant in current imitation learning (ACT, Diffusion Policy, LeRobot defaults).

Task space advantages: Human-interpretable actions. Easier to specify Cartesian constraints. Better cross-embodiment transfer (a Cartesian "move right 5cm" command works for any arm). Preferred by teleoperation interfaces and classical industrial programming.

Hybrid approaches: Some systems train policies in task space and use IK for execution. Others train in joint space but add Cartesian safety filters. Foundation models like Octo and RT-2 experiment with both, often tokenizing actions in whichever space the training data was collected in.

Practical Requirements

Calibration: Joint-space control requires accurate joint encoders and proper homing/calibration. Encoder drift or missed index pulses cause systematic position errors that accumulate along the kinematic chain.

Normalization: When training neural policies, joint values should be normalized to a common range (typically [-1, 1]) across all joints. Different joints may have vastly different ranges (e.g., -180 to +180 degrees vs. 0 to 50mm for a prismatic joint), and unnormalized inputs can cause training instability.
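A minimal normalization sketch, assuming illustrative per-joint ranges (five revolute joints in radians plus one prismatic joint in metres):

```python
import numpy as np

# Per-joint ranges (illustrative, not from any specific robot).
q_min = np.array([-3.14, -3.14, -3.14, -3.14, -3.14, 0.00])
q_max = np.array([ 3.14,  3.14,  3.14,  3.14,  3.14, 0.05])

def normalize(q):
    """Map raw joint values to [-1, 1] per joint."""
    return 2.0 * (q - q_min) / (q_max - q_min) - 1.0

def denormalize(a):
    """Map a policy action in [-1, 1] back to raw joint values."""
    return (a + 1.0) / 2.0 * (q_max - q_min) + q_min
```

The same statistics must be applied at training and inference time; a mismatch between the two silently scales every commanded joint target.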

Action representation: Policies can output absolute joint positions, joint position deltas, joint velocities, or joint torques. Absolute positions are most common for position-controlled arms (ALOHA, Koch, SO-100). Torque-space policies are used for force-controlled systems but are harder to train and require accurate dynamics models.
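The position-style representations can be unified behind a single conversion helper. This is an illustrative sketch, not any particular library's API; the function name, modes, and default control period are assumptions:

```python
import numpy as np

def to_target(action, q_current, mode, dt=0.02):
    """Convert a policy action into an absolute joint-position target.

    mode: 'absolute' -> action IS the target position
          'delta'    -> action is an offset from the current position
          'velocity' -> action is a joint velocity, integrated over dt
    """
    if mode == "absolute":
        return action
    if mode == "delta":
        return q_current + action
    if mode == "velocity":
        return q_current + action * dt
    raise ValueError(f"unknown action mode: {mode}")
```

Delta and velocity modes accumulate drift if the policy runs open-loop for many steps, which is one reason absolute positions dominate on position-controlled arms.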

The Jacobian: Connecting Joint Space and Task Space

The Jacobian matrix J(q) is the mathematical bridge between joint space and task space. It maps joint velocities to end-effector velocities: v = J(q) * dq/dt, where v is the 6D end-effector velocity (3 linear + 3 angular) and dq/dt is the vector of joint velocities. The Jacobian depends on the current joint configuration q, meaning the mapping changes as the robot moves.

The Jacobian is central to: (1) inverse kinematics via the Jacobian pseudoinverse method; (2) impedance control via the Jacobian transpose for force mapping; (3) singularity analysis (the Jacobian becomes rank-deficient at singular configurations); and (4) manipulability analysis (the Jacobian's condition number or determinant indicates how well the robot can move in different directions from a given configuration).

For a 6-DOF robot, J is a 6x6 matrix. At singular configurations (arm fully extended, two rotation axes aligned), the Jacobian loses rank and the IK solution becomes ill-conditioned — small task-space motions require enormous joint velocities. For 7-DOF redundant robots, J is 6x7, and the null space (1-dimensional in this case) allows the robot to reconfigure its joints without changing end-effector pose, useful for avoiding joint limits and singularities.
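The pseudoinverse IK method mentioned above can be sketched numerically for the planar 2-link arm. The damped least-squares variant shown here (damping value illustrative) keeps the step bounded near singular configurations, where the plain pseudoinverse would demand enormous joint velocities:

```python
import numpy as np

L1, L2 = 0.3, 0.25  # illustrative link lengths

def fk(q):
    # Planar 2-link forward kinematics: joint angles -> (x, y).
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q, eps=1e-6):
    """Finite-difference Jacobian J(q): maps joint velocities
    to end-effector velocities."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

def ik_step(q, target, damping=1e-2):
    """One damped-least-squares IK update toward a Cartesian target."""
    J = jacobian(q)
    err = target - fk(q)
    return q + J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)

# Iterate toward a reachable target.
q = np.array([0.3, 0.5])
target = np.array([0.2, 0.35])
for _ in range(100):
    q = ik_step(q, target)
print(fk(q))  # approximately [0.2, 0.35]
```

At a singular configuration (e.g. the arm fully extended, q2 = 0) the 2x2 Jacobian here loses rank, and the damping term is what keeps the linear solve well-conditioned.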

Configuration Space for Different Robot Types

  • OpenArm 101 (6-DOF): 6-dimensional joint space. Each joint is revolute with range approximately -170 to +170 degrees. The configuration space is a 6D hyperrectangle. Typical joint-space policies output 6 joint angle targets plus a gripper command at 50 Hz.
  • DK1 (6-DOF + gripper): Similar to OpenArm but with different joint ranges and dynamics. Joint-space policies are directly interchangeable between arms of the same kinematic structure, but require retraining for different models due to dynamics differences.
  • Unitree G1 humanoid (30+ DOF): The full configuration space includes legs (12 DOF), arms (14 DOF), torso (2 DOF), and head (2 DOF). Whole-body control operates in this high-dimensional joint space, typically with hierarchical task decomposition to manage complexity.
  • Bimanual ALOHA (14 DOF): Two 6-DOF arms plus two grippers. Joint-space policies output a 14-dimensional action vector. The high dimensionality makes action chunking particularly important to reduce the decision-making frequency.

See Also

Key Papers

  • Lozano-Pérez, T. (1983). "Spatial Planning: A Configuration Space Approach." IEEE Transactions on Computers. The foundational paper introducing configuration space for motion planning.
  • Khatib, O. (1987). "A Unified Approach for Motion and Force Control of Robot Manipulators: The Operational Space Formulation." IEEE Journal of Robotics and Automation. Establishes the operational (task) space framework and its relationship to joint space.
  • Zhao, T. et al. (2023). "Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware." RSS 2023. ACT policy operating in joint space, demonstrating the effectiveness of joint-space imitation learning.

Related Terms

  • Inverse Kinematics — Maps task-space goals to joint-space configurations
  • Motion Planning — Searches for collision-free paths through joint space
  • Workspace Analysis — Characterizes which task-space poses are reachable from joint space
  • Action Chunking — Predicts sequences of joint-space actions in a single forward pass
  • Whole-Body Control — Optimizes over the full joint space of a humanoid or mobile manipulator

Apply This at SVRC

Silicon Valley Robotics Center's data collection pipelines record both joint-space and task-space trajectories simultaneously, giving your team flexibility to train policies in either representation. Our engineers handle joint calibration, normalization, and action-space selection for your specific robot and task.
