Mobile robots operating in real-world outdoor scenarios depend on dynamic scene understanding for detecting and avoiding obstacles, recognizing landmarks, acquiring models, and detecting and tracking moving objects. Motion understanding has been an active research area for more than a decade, yet it remains one of the most difficult and challenging problems in computer vision. Qualitative Motion Understanding describes a qualitative approach to dynamic scene and motion analysis, called DRIVE (Dynamic Reasoning from Integrated Visual Evidence). The DRIVE system addresses the problems of (a) estimating the robot's egomotion, (b) reconstructing the observed 3-D scene structure, and (c) evaluating the motion of individual objects from a sequence of monocular images. The approach is based on the FOE (focus of expansion) concept, but it takes a somewhat unconventional route. The DRIVE system uses a qualitative scene model and a fuzzy focus of expansion to estimate robot motion from visual cues, to detect and track moving objects, and to construct and maintain a global dynamic reference model.
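To make the FOE concept concrete: under pure camera translation, every optical-flow vector points radially away from a single image point, the focus of expansion. The sketch below is not DRIVE's fuzzy FOE, only the conventional starting point it builds on: a minimal least-squares estimate of a point FOE from flow vectors, assuming noise-free, purely translational flow (all names and the NumPy formulation are illustrative, not from the book).

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion: the image point minimizing the
    summed squared perpendicular distance to every flow line.
    points, flows: (N, 2) arrays of image positions and flow vectors."""
    # Unit normal perpendicular to each flow direction.
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    # Normal equations: (sum n n^T) x = sum (n . p) n
    A = n.T @ n
    b = (n * (n * points).sum(axis=1, keepdims=True)).sum(axis=0)
    return np.linalg.solve(A, b)

# Synthetic purely translational flow expanding from a known FOE.
rng = np.random.default_rng(0)
foe_true = np.array([40.0, 30.0])
pts = rng.uniform(-100.0, 100.0, size=(50, 2))
flow = 0.05 * (pts - foe_true)  # radial expansion away from the FOE
print(estimate_foe(pts, flow))  # recovers approximately [40. 30.]
```

With real, noisy flow the intersection of the flow lines is no longer a single point, which is exactly the motivation the book gives for replacing the point estimate with a fuzzy (region-based) focus of expansion.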