VR/AR applications, such as virtual training or coaching, often require a digital twin of a machine. Such a digital twin must also include a kinematic model that defines its motion behavior. This behavior is usually expressed by constraints in a physics engine. In this paper, we present a system that automatically derives the kinematic model of a machine from RGB video with an optional depth channel. Our system records a live session while a user performs all typical machine movements. It then searches for part trajectories and converts them into linear, circular, and helical constraints. Our system can also detect kinematic chains and coupled constraints, for example, when a crank moves a toothed rod.
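To make the trajectory-to-constraint step concrete, the following is a minimal sketch, not the authors' implementation: it classifies a sampled 3D trajectory of a tracked part as linear, circular, or helical using a principal-axis test and a planar circle fit. All names (`classify_trajectory`, the tolerance `tol`) and the specific fitting choices are illustrative assumptions.

```python
import numpy as np

def classify_trajectory(points, tol=1e-3):
    """Classify a sampled 3D trajectory as 'linear', 'circular', or 'helical'.

    points: (N, 3) array of tracked part positions over time.
    tol: RMS residual threshold in scene units (hypothetical tuning parameter).
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal axes of the trajectory via SVD; singular values measure
    # how much the points spread along each axis.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    n = len(pts)

    # Linear motion: almost all spread lies on one axis, so the RMS
    # distance to the best-fit line is small.
    if np.sqrt(s[1]**2 + s[2]**2) / np.sqrt(n) < tol:
        return "linear"

    # Circular motion: the points are (nearly) planar and fit a circle.
    if s[2] / np.sqrt(n) < tol:
        # Project onto the best-fit plane and do an algebraic circle fit:
        # u^2 + v^2 = 2*cx*u + 2*cy*v + (r^2 - cx^2 - cy^2).
        uv = centered @ vt[:2].T
        A = np.column_stack([2 * uv, np.ones(n)])
        b = (uv**2).sum(axis=1)
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx**2 + cy**2)
        circ_residual = np.abs(np.hypot(uv[:, 0] - cx, uv[:, 1] - cy) - r).mean()
        if circ_residual < tol:
            return "circular"

    # Otherwise: rotation combined with translation along the axis.
    return "helical"
```

In a full pipeline, a classifier along these lines would run on each rigid part tracked in the recorded session, and the fitted axis, center, and radius would parameterize the corresponding physics-engine constraint.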