01 LiDAR: Light Detection and Ranging. A sensor that measures distance by illuminating targets with laser light and measuring the reflections. Essential for autonomous navigation and 3D mapping.
02 Computer Vision: The field of enabling computers to interpret and understand visual information from cameras. A core technology for robot perception.
03 Depth Camera: A camera that captures distance information for each pixel, creating a 3D representation of the scene. Technologies include structured light, time-of-flight, and stereo vision.
04 RGB-D Camera: A sensor combining a regular color (RGB) camera with a depth sensor, providing both color and distance information. Examples: Intel RealSense, Microsoft Kinect.
05 Time-of-Flight (ToF): A distance-measuring technique that calculates depth from the time light takes to travel to an object and back.
06 Structured Light: A 3D scanning technique that projects known patterns onto a scene and analyzes their distortion to compute depth.
07 Stereo Vision: Computing depth from two cameras by matching corresponding points in the left and right images, similar to human binocular vision.
08 SLAM: Simultaneous Localization and Mapping. Algorithms that build a map of an unknown environment while simultaneously tracking the robot's location within it.
09 Odometry: Estimating position change over time from motion sensor data. Subject to cumulative drift errors unless corrected.
10 IMU (Inertial Measurement Unit): A sensor combining accelerometers and gyroscopes to measure acceleration and rotation. Used for robot orientation and motion tracking.
11 Encoder: A sensor that measures the position or speed of a rotating shaft. Essential for robot joint feedback. Types: incremental, absolute, optical, magnetic.
12 Force/Torque Sensor: A sensor that measures the forces and torques applied to it, typically mounted between a robot's wrist and end effector. Enables force-controlled manipulation.
13 Tactile Sensor: A sensor that detects physical contact, pressure, or texture. Enables robots to 'feel' objects during manipulation.
14 Proximity Sensor: A sensor that detects the presence of nearby objects without physical contact. Technologies include infrared, ultrasonic, and capacitive.
15 Object Detection: The computer vision task of locating and classifying objects within an image. A foundation for robot perception systems.
16 Semantic Segmentation: Classifying each pixel in an image into predefined categories. Enables robots to understand scene structure.
17 Point Cloud: A set of 3D points representing the external surfaces of objects, typically generated by LiDAR or depth cameras.
18 Proprioception: A robot's sense of its own body position and movement, derived from internal sensors such as encoders and IMUs.
19 Exteroception: A robot's perception of the external environment through sensors such as cameras, LiDAR, and proximity sensors.
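Several of the depth-sensing entries above (time-of-flight, stereo vision, depth cameras, point clouds) come down to short geometric formulas. A minimal sketch in Python with NumPy; the function names and the example intrinsics are illustrative assumptions, not part of any particular sensor SDK:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight (entry 05): light travels out and back, so halve."""
    return C * round_trip_time_s / 2.0

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo vision (entry 07): depth Z = f * B / d for a matched point pair."""
    return focal_px * baseline_m / disparity_px

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (entry 03) into a point cloud (entry 17),
    assuming an ideal pinhole camera with intrinsics fx, fy, cx, cy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# A 20 ns round trip corresponds to roughly 3 m of range,
# and a 35 px disparity with a 700 px focal length and 12 cm
# baseline corresponds to 2.4 m of depth.
d_tof = tof_distance(20e-9)          # ~2.998 m
d_stereo = stereo_depth(700.0, 0.12, 35.0)  # 2.4 m
cloud = depth_to_point_cloud(np.ones((2, 2)), 1.0, 1.0, 0.5, 0.5)
```

The intrinsics (fx, fy, cx, cy) here are hypothetical; in practice they come from the camera's factory calibration or a calibration procedure.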