Before an autonomous vehicle’s sophisticated AI can classify a single pixel or plan a trajectory, it must first perceive the world. This perception is not direct; it is entirely mediated by a suite of sensors translating physical phenomena—light, radio waves, sound, acceleration—into digital data. This translation layer is the most fundamental attack surface. If you can control what the vehicle “sees” or “feels,” you can control its reality, bypassing layers of algorithmic defenses before they ever have a chance to engage.
Sensor deception is the art of manipulating the physical environment to produce flawed digital representations within the target system. It’s an attack on the system’s input truth, turning the physical world into an adversarial domain.
Red Team Objective: Move beyond software-only exploits. Your goal is to demonstrate how physical-world manipulations can induce critical system failures, such as causing a vehicle to misinterpret its environment, ignore threats, or execute dangerous maneuvers based on fabricated sensor data.
The Sensor Suite: A Multi-Modal Attack Surface
A self-driving vehicle relies on sensor fusion—the combination of data from multiple sensor types—to build a robust model of its surroundings. While this provides redundancy, it also presents a multi-modal attack surface. An effective red team engagement requires understanding the unique vulnerabilities of each key sensor.
- Cameras (Vision): The primary sensor for interpreting semantic information like traffic signs, lane lines, and pedestrian intent.
- LiDAR (Light Detection and Ranging): Creates high-resolution 3D point clouds of the environment, crucial for object shape detection and localization.
- Radar (Radio Detection and Ranging): Excels at detecting object distance and velocity, even in adverse weather conditions where cameras and LiDAR may struggle.
- GPS/GNSS & IMU: Global Navigation Satellite Systems (GPS, GLONASS, Galileo, BeiDou) provide absolute positioning, while the Inertial Measurement Unit tracks orientation and acceleration, forming the vehicle’s proprioceptive sense.
Core Deception Techniques
Sensor attacks generally fall into three categories, varying in sophistication and resource requirements.
1. Jamming (Denial of Service)
The simplest form of sensor deception is to overwhelm a sensor with noise, effectively blinding it. This is a denial-of-service attack on the physical layer. For example, a powerful infrared LED array can saturate a LiDAR sensor’s receiver, preventing it from ranging legitimate objects. Similarly, a radio transmitter broadcasting noise in the radar’s operating band (automotive radars typically operate near 24 GHz or in the 76-81 GHz band) can blind a radar sensor. The goal is to force the vehicle to rely on its other, potentially less-capable, sensors.
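To see why optical jamming is so cheap for the attacker, consider a rough link budget. The back-of-envelope sketch below compares the power a LiDAR receiver collects from a legitimate diffuse target with the power a modest IR emitter deposits directly into its aperture. Every power level, aperture, and geometry here is an illustrative assumption, not a measurement of any real sensor.

```python
import math

# Back-of-envelope: legitimate LiDAR return vs. direct IR jamming.
# All parameters below are illustrative assumptions, not measured values.

APERTURE_DIAMETER = 0.025                        # m, assumed receiver aperture
A_RX = math.pi * (APERTURE_DIAMETER / 2) ** 2    # receiver area, m^2

def lidar_return_power(p_tx, reflectivity, target_range):
    """Peak power returned from an extended Lambertian target (simplified
    LiDAR range equation, ignoring atmospheric and optics losses)."""
    return p_tx * reflectivity * A_RX / (math.pi * target_range ** 2)

def jammer_power_at_receiver(p_jam, beam_half_angle_deg, jammer_range):
    """Power a cone-shaped IR emitter deposits into the aperture,
    assuming uniform irradiance across its illuminated spot."""
    spot_radius = jammer_range * math.tan(math.radians(beam_half_angle_deg))
    irradiance = p_jam / (math.pi * spot_radius ** 2)    # W/m^2
    return irradiance * A_RX

legit = lidar_return_power(p_tx=50.0, reflectivity=0.1, target_range=50.0)
jam = jammer_power_at_receiver(p_jam=2.0, beam_half_angle_deg=10.0,
                               jammer_range=20.0)

print(f"Legitimate return:  {legit * 1e6:.2f} uW")   # ~0.31 uW
print(f"Jammer at aperture: {jam * 1e6:.2f} uW")     # ~25 uW
```

Even with these generous assumptions, the 2 W emitter delivers roughly eighty times the power of the legitimate return, which is why receiver saturation demands so little attacker hardware.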
2. Spoofing (Data Injection)
Spoofing is more subtle and dangerous. Instead of just blinding a sensor, you feed it carefully crafted, malicious data that it accepts as legitimate. This requires understanding the sensor’s operating principles. A successful spoofing attack can create “ghost” objects—vehicles that aren’t there—or erase real objects from the vehicle’s perception.
For example, a LiDAR spoofer doesn’t just shine light; it listens for the target vehicle’s outgoing laser pulses and transmits its own synchronized pulses back, precisely timed to mimic the reflection from a non-existent object at a specific distance.
```python
# --- Pseudocode: basic LiDAR spoofing logic ---
# detect_pulse(), wait_ns(), and fire_spoofing_laser() stand in for real
# hardware: a photodiode trigger, a delay generator, and a laser driver.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def spoof_lidar_point(victim_sensor, fake_distance, spoofer_standoff):
    # 1. Trigger on an outgoing laser pulse from the victim's LiDAR.
    outgoing_pulse = victim_sensor.detect_pulse()
    if outgoing_pulse is None:
        return

    # 2. Compute the delay that simulates the fake distance. The victim
    #    measures range as c * round_trip_time / 2, and our detection and
    #    reply each already spend spoofer_standoff / c in flight, so:
    #    delay = 2 * (fake_distance - spoofer_standoff) / c
    time_delay = 2.0 * (fake_distance - spoofer_standoff) / SPEED_OF_LIGHT

    # 3. Wait that precise interval.
    wait_ns(time_delay * 1e9)

    # 4. Fire a pulse back into the victim's receiver.
    fire_spoofing_laser(victim_sensor.direction)
    # The victim's sensor now registers a point at 'fake_distance'.
```
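Note the timing tolerance step 3 implies: the victim converts delay to range at c/2, roughly 0.15 m per nanosecond, so one nanosecond of jitter in the spoofer’s response shifts the phantom point by about 15 cm. Practical spoofers therefore rely on dedicated delay-generator hardware rather than software timers.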
3. Physical Adversarial Objects
This technique exploits the AI models downstream of the sensor that interpret its data. By crafting physical objects, patterns, or surface modifications designed to exploit a model’s decision boundaries, you can cause predictable misclassifications. The best-known examples involve placing adversarial patches (stickers) on traffic signs to make a camera-based system misread a “Stop” sign as a speed-limit sign. This attack vector blurs the line between sensor deception and perception stack attacks (covered in 9.1.2), as it targets the entire pipeline from photon to classification.
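The patches themselves are typically produced by gradient-based optimization against a surrogate model. The sketch below shows the core loop, using a generic ImageNet classifier as a stand-in for a real traffic-sign network; the model choice, patch size, placement, learning rate, and target label are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Core loop of adversarial patch optimization. A generic ImageNet
# classifier stands in for a real traffic-sign network.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)        # only the patch is trainable

image = torch.rand(1, 3, 224, 224)                      # stand-in scene
patch = torch.rand(1, 3, 48, 48, requires_grad=True)    # the "sticker"
target = torch.tensor([919])       # illustrative target class index
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    # Paste the (clamped, printable) patch onto a fixed image region.
    patched = image.clone()
    patched[:, :, 100:148, 100:148] = patch.clamp(0.0, 1.0)

    # Push the classifier toward the target class wherever the patch
    # appears by minimizing cross-entropy against that label.
    loss = F.cross_entropy(model(patched), target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# 'patch' now holds a pattern that biases the model toward 'target'.
```

A real physical attack would additionally apply random scale, rotation, and lighting transforms inside the loop (Expectation over Transformation) so the optimized pattern survives printing and varied viewing angles.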
Sensor Vulnerability Matrix
As a red teamer, you must choose your attack vector based on your objective, resources, and the target system’s architecture. The following table summarizes common approaches.
| Sensor | Primary Function | Common Deception Methods | Attacker Complexity | Potential Impact |
|---|---|---|---|---|
| Camera (Vision) | Scene understanding, object ID | Adversarial patches, projectors, laser dazzling, IR flooding | Low to Medium | Misclassification, object invisibility, lane departure |
| LiDAR | 3D mapping, object detection | Point cloud spoofing, jamming, relay attacks | High | Ghost objects, object removal, incorrect distance |
| Radar | Object detection, velocity | Signal spoofing/jamming, corner reflectors | Medium to High | False vehicle detection, phantom braking, target masking |
| GPS/GNSS | Global positioning | Signal spoofing, jamming | Medium | Incorrect location, route deviation, map mismatch |
| IMU | Inertial measurement | Acoustic/vibrational resonance, physical manipulation | High | Unstable control, incorrect orientation, loss of localization |
Red Teaming Considerations
Testing for sensor deception vulnerabilities requires a different mindset and toolset than traditional cybersecurity. Your lab is the physical world.
- Hardware is Required: You cannot effectively simulate these attacks from a keyboard. Red teaming autonomous vehicles necessitates investment in hardware like software-defined radios (SDRs) for GPS/radar spoofing, synchronized laser/detector pairs for LiDAR attacks, and high-powered projectors.
- Physics is Your Exploit Primitive: Success depends on understanding optics, radio frequency theory, and signal processing. Your “payload” is a carefully crafted physical phenomenon.
- Sensor Fusion as a Defense (and Target): A common defense is to cross-validate sensor data. If the camera sees a car but LiDAR and radar do not, the camera data might be dismissed as a hallucination. Therefore, the most effective red team engagements often involve multi-modal attacks that deceive several sensors simultaneously, presenting a consistent yet false reality to the fusion engine. For example, you might spoof a radar return while projecting a faint image of a car for the vision system to classify; a sketch of such a cross-validation gate follows this list.
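As a concrete picture of the defense you are trying to beat, the sketch below implements a minimal cross-validation gate of the kind described above: a detection that no second modality corroborates has its confidence discounted. The Detection type, gate radius, and penalty factor are illustrative assumptions, not any production fusion design.

```python
import math
from dataclasses import dataclass

# Minimal cross-validation gate: a detection that no other modality
# corroborates is treated as a possible ghost and down-weighted.

@dataclass
class Detection:
    sensor: str            # "camera", "lidar", or "radar"
    position: tuple        # (x, y) in the vehicle frame, metres
    confidence: float      # detector score in [0, 1]

def corroborated(det, detections, gate_radius=2.0):
    """True if a detection from a *different* modality lies within
    gate_radius metres of this one."""
    return any(
        other.sensor != det.sensor
        and math.dist(det.position, other.position) <= gate_radius
        for other in detections
    )

def fuse(detections, solo_penalty=0.5):
    """Return (detection, adjusted_confidence) pairs."""
    return [
        (det, det.confidence if corroborated(det, detections)
         else det.confidence * solo_penalty)
        for det in detections
    ]

# A camera-only "car" with nothing nearby in LiDAR/radar is discounted:
frame = [
    Detection("camera", (30.0, 1.5), 0.9),    # uncorroborated -> 0.45
    Detection("radar",  (55.0, 0.0), 0.8),
    Detection("lidar",  (55.3, 0.2), 0.85),   # radar and lidar agree
]
print(fuse(frame))
```

The multi-modal attack described above succeeds precisely because it satisfies such a gate: the spoofed radar return and the projected image corroborate each other, so neither is discounted.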
By mastering the art of sensor deception, you challenge the very foundation of an autonomous system’s worldview. The data generated by these attacks will flow downstream, providing the perfect entry point for testing the resilience of the perception and decision-making stacks that follow.